Internet Engineering Task Force                                Y. Bernet
Diffserv Working Group                                         Microsoft
INTERNET-DRAFT                                                  S. Blake
Expires May 2001                                                 Ericsson
draft-ietf-diffserv-model-05.txt                             D. Grossman
                                                                Motorola
                                                                A. Smith
                                                                <editor>
                                                        Allegro Networks
                                                           November 2000
           An Informal Management Model for Diffserv Routers
             ***** Preliminary Authors' Review DRAFT *****

Status of this Memo

This document is an Internet-Draft and is in full conformance with all
provisions of Section 10 of RFC2026.  Internet-Drafts are working
documents of the Internet Engineering Task Force (IETF), its areas, and
its working groups. Note that other groups may also distribute working
documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference material
or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt. The list of Internet-Draft
Shadow Directories can be accessed at http://www.ietf.org/shadow.html.

This document is a product of the IETF's Differentiated Services working
group. Comments should be addressed to the WG's mailing list at
diffserv@ietf.org. The charter for Differentiated Services may be found
at http://www.ietf.org/html.charters/diffserv-charter.html

Copyright (C) The Internet Society (2000). All Rights Reserved.

Distribution of this memo is unlimited.

Abstract

This document proposes an informal management model of Differentiated
Services (Diffserv) routers for use in their management and
configuration.  This model defines functional datapath elements (e.g.
classifiers, meters, actions (e.g. marking, absolute dropping, counting,
multiplexing), algorithmic droppers, queues and schedulers). It
describes possible configuration parameters for these elements and how
they might be interconnected to realize the range of traffic
conditioning and per-hop behavior (PHB) functionalities described in the
Diffserv Architecture [DSARCH].

The model is intended to be abstract and capable of representing the
configuration parameters important to Diffserv functionality for a
variety of specific router implementations. It is not intended as a
guide to system implementation nor as a formal modelling description.
This model serves as the rationale for the design of an SNMP MIB [DSMIB]
and for other configuration interfaces (e.g. [DSPIB] and other policy-
management protocols) and, possibly, more detailed formal models (e.g.
[QOSDEVMOD]): these should all be consistent with this model.

1.  Introduction

Differentiated Services (Diffserv) [DSARCH] is a set of technologies
which allow network service providers to offer services with different
kinds of network quality-of-service (QoS) objectives to different
customers and their traffic streams. This document uses terminology
defined in [DSARCH] and other work-in-progress from the IETF's Diffserv
working group (some of these definitions are included here in Section 2
for completeness).

The premise of Diffserv networks is that routers within the core of the
network handle packets in different traffic streams by forwarding them
using different per-hop behaviors (PHBs).  The PHB to be applied is
indicated by a Diffserv codepoint (DSCP) in the IP header of each packet
[DSFIELD]. The DSCP markings are applied either by a trusted customer
or by the edge routers on entry to the Diffserv network.

The advantage of such a scheme is that many traffic streams can be
aggregated to one of a small number of behavior aggregates (BA) which
are each forwarded using the same PHB at the router, thereby simplifying
the processing and associated storage. In addition, there is no
signaling, other than what is carried in the DSCP of each packet, and no
other related processing that is required in the core of the Diffserv
network since QoS is invoked on a packet-by- packet packet-by-packet basis.

The Diffserv architecture enables a variety of possible services which
could be deployed in a network. These services are reflected to
customers at the edges of the Diffserv network in the form of a Service
Level Specification (SLS - see Section 2). The ability to provide these
services depends on the availability of cohesive management and
configuration tools that can be used to provision and monitor a set of
Diffserv routers in a coordinated manner. To facilitate the development
of such configuration and management tools it is helpful to define a
conceptual model of a Diffserv router that abstracts away implementation
details of particular Diffserv routers from the parameters of interest
for configuration and management. The purpose of this document is to
define such a model.

The basic forwarding functionality of a Diffserv router is defined in
other specifications; e.g., [DSARCH, DSFIELD, AF-PHB, EF-PHB].

This document is not intended in any way to constrain or to dictate the
implementation alternatives of Diffserv routers. It is expected that
router implementers will demonstrate a great deal of variability in
their implementations. To the extent that implementers are able to model
their implementations using the abstractions described in this document,
configuration and management tools will more readily be able to
configure and manage networks incorporating Diffserv routers of assorted
origins.

o    Section 3 starts by describing the basic high-level blocks of a
     Diffserv router. It explains the concepts used in the model,
     including the hierarchical management model for these blocks which
     uses low-level functional datapath elements such as Classifiers,
     Actions and Queues.

o    Section 4 describes Classifier elements.

o    Section 5 discusses Meter elements.

o    Section 6 discusses Action elements.

o    Section 7 discusses the basic queueing elements of Algorithmic
     Droppers, Queues and Schedulers and their functional behaviors
     (e.g. traffic shaping).

o    Section 8 shows how the low-level elements can be combined to build
     modules called Traffic Conditioning Blocks (TCBs) which are useful
     for management purposes.

o    Section 9 discusses security concerns.

o    Appendix A contains a brief discussion of the token bucket and
     leaky bucket algorithms used in this model and some of the
     practical effects of the use of token buckets within the Diffserv
     architecture.

2.  Glossary

This document uses terminology which is defined in [DSARCH]. There is
also current work-in-progress on this terminology in the IETF and some
of the definitions provided here are taken from that work.  Some of the
terms from these other references are defined again here in order to
provide additional detail, along with some new terms specific to this
document.

   Absolute      A functional datapath element which simply discards all
   Dropper       packets arriving at its input.

   Algorithmic   A functional datapath element which selectively discards
   Dropper       packets that arrive at its input, based on a discarding
                 algorithm. It has one data input and one output.

   Classifier    A functional datapath element which consists of
                 filters that select matching and non-matching
                 packets. Based on this selection, packets are
                 forwarded along the appropriate datapath within the
                 router. A classifier, therefore, splits a single
                 incoming traffic stream into multiple outgoing
                 streams.

   Counter       A functional datapath element which updates a packet
                 counter and also an octet counter for every
                 packet that passes through it. Used for collecting
                 statistics.

   Datapath      A conceptual path taken by packets with particular
                 characteristics through a Diffserv router. Decisions
                 as to the path taken by a packet are made by functional
                 datapath elements such as Classifiers and Meters.

   Filter        A set of wildcard, prefix, masked, range and/or exact
                 match conditions on the content of a packet's
                 headers or other data, and/or on implicit or derived
                 attributes associated with the packet. A filter is
                 said to match only if each condition is satisfied.

   Functional    A basic building block of the conceptual router.
   Datapath      Typical elements are Classifiers, Meters, Actions,
   Element       Algorithmic Droppers, Queues and Schedulers.

   Multiplexer   A multiplexor.
   (Mux)

   Multiplexor   A functional datapath element that merges multiple
   (Mux)         traffic streams (datapaths) into a single traffic
                 stream (datapath).

   Non-work-     A property of a scheduling algorithm such that it
   conserving    services packets no sooner than a scheduled departure
                 time, even if this means leaving packets queued while
                 the output (e.g. a network link or connection to the
                 next element) is idle.

   Policing      The process of comparing the arrival of data packets
                 against a temporal profile and forwarding, delaying
                 or dropping them so as to make the output stream
                 conformant to the profile. Policing is modelled
                 here as the combination of either a meter or a
                 scheduler with either an absolute dropper or an
                 algorithmic dropper.

   Queueing      A combination of functional datapath elements
   Block         that modulates the transmission of packets belonging
                 to a traffic stream and determines their
                 ordering, possibly storing them temporarily or
                 discarding them.

   Scheduling    An algorithm which determines which queue of a set
   algorithm     of queues to service next. This may be based on the
                 relative priority of the queues, on a weighted fair
                 bandwidth sharing policy or some other policy. Such
                 an algorithm may be either work-conserving or non-
                 work-conserving.

   Service-Level A set of parameters and their values which together
   Specification define the service offered to a traffic stream by a
   (SLS)         Diffserv domain.

   Shaping       The process of delaying packets within a traffic stream
                 to cause it to conform to some defined temporal profile.
                 Shaping can be implemented using a queue serviced by a
                 non-work-conserving scheduling algorithm.

   Traffic       A logical datapath entity consisting of a number of
   Conditioning  functional datapath elements interconnected in
   Block (TCB)   such a way as to perform a specific set of traffic
                 conditioning functions on an incoming traffic stream.
                 A TCB can be thought of as an entity with one
                 input and one or more outputs and a set of control
                 parameters.

   Traffic       A set of parameters and their values which together
   Conditioning  specify a set of classifier rules and a traffic
   Specification profile. A TCS is an integral element of a SLS.
   (TCS)

   Work-         A property of a scheduling algorithm such that it
   conserving    services a packet, if one is available, at every
                 transmission opportunity.

3.  Conceptual Model

This section introduces a block diagram of a Diffserv router and
describes the various components illustrated in Figure 1. Note that a
Diffserv core router is likely to require only a subset of these
components: the model presented here is intended to cover the case of
both Diffserv edge and core routers.

3.1.  Components of a Diffserv Router

The conceptual model includes abstract definitions for the following:

   o    Traffic Classification elements.

   o    Metering functions.

   o    Actions of Marking, Absolute Dropping, Counting and
        Multiplexing.

   o    Queueing elements, including capabilities of algorithmic
        dropping and scheduling.

   o    Certain combinations of the above functional datapath elements
        into higher-level blocks known as Traffic Conditioning Blocks
        (TCBs).

The components and combinations of components described in this document
form building blocks that need to be manageable by Diffserv
configuration and management tools. One of the goals of this document is
to show how a model of a Diffserv device can be built using these
component blocks. This model is in the form of a connected directed
acyclic graph (DAG) of functional datapath elements that describes the
traffic conditioning and queueing behaviors that any particular packet
will experience when forwarded to the Diffserv router. Figure 1
illustrates the major functional blocks of a Diffserv router.

3.1.1.  Datapath

An ingress interface, routing core and egress interface are illustrated
at the center of the diagram. In actual router implementations, there
may be an arbitrary number of ingress and egress interfaces
interconnected by the routing core. The routing core element serves as
               +---------------+
               | Diffserv      |
        Mgmt   | configuration |
      <----+-->| & management  |------------------+
      SNMP,|   | interface     |                  |
      COPS |   +---------------+                  |
      etc. |        |                             |
           |        |                             |
           |        v                             v
           |   +-------------+                 +-------------+
           |   | ingress i/f |   +---------+   | egress i/f  |
     --------->|  classify,  |-->| routing |-->|  classify,  |---->
     data  |   |  meter,     |   |  core   |   |  meter      |data out
      in   |   |  action,    |   +---------+   |  action,    |
           |   |  queueing   |                 |  queueing   |
           |   +-------------+                 +-------------+
           |        ^                             ^
           |        |                             |
           |        |                             |
           |   +------------+                     |
           +-->| QOS agent  |                     |
      -------->| (optional) |---------------------+
        QOS    | (e.g. RSVP)|
        cntl   +------------+
        msgs
              Figure 1:  Diffserv Router Major Functional Blocks

an abstraction of a router's normal routing and switching functionality.
The routing core moves packets between interfaces according to policies
outside the scope of Diffserv (note: it is possible that such policies
for output-interface selection might involve use of packet fields such
as the DSCP but this is outside the scope of this model).  The actual
queueing delay and packet loss behavior of a specific router's switching
fabric/backplane is not modeled by the routing core; these should be
modeled using the functional datapath elements described later. The
routing core of this model can be thought of as an infinite bandwidth,
zero-delay backplane connecting interfaces - properties like the
behaviour of the core when overloaded need to be reflected back into the
queueing elements that are modelled around it e.g. when too much traffic
is directed across the core at an egress interface, the excess must
either be dropped or queued somewhere: the elements performing these
functions must be modelled on one of the interfaces involved.

The components of interest at the ingress to and egress from interfaces
are the functional datapath elements (e.g. Classifiers, Queueing
elements) that support Diffserv traffic conditioning and per-hop
behaviors [DSARCH]. These are the fundamental components comprising a
Diffserv router and are the focal point of this model.

3.1.2.  Configuration and Management Interface

Diffserv operating parameters are monitored and provisioned through this
interface. Monitored parameters include statistics regarding traffic
carried at various Diffserv service levels. These statistics may be
important for accounting purposes and/or for tracking compliance to
Traffic Conditioning Specifications (TCSs) [DSTERMS] negotiated with
customers.
Provisioned parameters are primarily the TCS parameters for Classifiers
and Meters and the associated PHB configuration parameters for Actions
and Queueing elements. The network administrator interacts with the
Diffserv configuration and management interface via one or more
management protocols, such as SNMP or COPS, or through other router
configuration tools such as serial terminal or telnet consoles.

Specific policy rules and goals governing the Diffserv behaviour of a
router are presumed to be installed by policy management mechanisms.
However, Diffserv routers are always subject to implementation limits
which scope the kinds of policies which can be successfully implemented
by the router. External reporting of such implementation capabilities is
considered out of scope for this document.

3.1.3.  Optional QoS Agent Module

Diffserv routers may snoop or participate in either per-microflow or
per-flow-aggregate signaling of QoS requirements [E2E] e.g.  using the
RSVP protocol. Snooping of RSVP messages may be used, for example, to
learn how to classify traffic without actually participating as an RSVP
protocol peer. Diffserv routers may reject or admit RSVP reservation
requests to provide a means of admission control to Diffserv-based
services or they may use these requests to trigger provisioning changes
for a flow-aggregation in the Diffserv network. A flow-aggregation in
this context might be equivalent to a Diffserv BA or it may be more
fine-grained, relying on a MF classifier [DSARCH]. Note that the
conceptual model of such a router implements the Integrated Services
Model as described in [INTSERV], applying the control plane controls to
the data classified and conditioned in the data plane, as described in
[E2E].

Note that a QoS Agent component of a Diffserv router, if present, might
be active only in the control plane and not in the data plane. In this
scenario, RSVP could be used merely to signal reservation state without
installing any actual reservations in the data plane of the Diffserv
router: the data plane could still act purely on Diffserv DSCPs and
provide PHBs for handling data traffic without the normal per-microflow
handling expected to support some Intserv services.

3.2.  Diffserv Functions at Ingress and Egress

This document focuses on the Diffserv-specific components of the router.
Figure 2 shows a high-level view of ingress and egress interfaces of a
router.  The diagram illustrates two Diffserv router interfaces, each
having a set of ingress elements and a set of egress elements. It shows
classification, metering, action and queueing functions which might be
instantiated at each interface's ingress and egress.

In principle, if one were to construct a network entirely out of two-
port routers (connected by LANs or similar media), then it might be
necessary for each router to perform four QoS control functions in the
datapath on traffic in each direction:

-    Classify each message according to some set of rules, possibly just
     a "match everything" rule.

-    If necessary, determine whether the data stream the message is part
     of is within or outside its rate by metering the stream.

-    Perform a set of resulting actions, including applying a drop
     policy appropriate to the classification and queue in question and
     perhaps additionally marking the traffic with a Differentiated
     Services Code Point (DSCP) as defined in [DSFIELD].

             Interface A                        Interface B
          +-------------+     +---------+     +-------------+
          | ingress:    |     |         |     | egress:     |
          |   classify, |     |         |     |   classify, |
      --->|   meter,    |---->|         |---->|   meter,    |--->
          |   action,   |     |         |     |   action,   |
          |   queueing  |     | routing |     |   queueing  |
          +-------------+     |  core   |     +-------------+
          +-------------+     |         |     +-------------+
          | egress:     |     |         |     | ingress:    |
          |   classify, |     |         |     |   classify, |
      <---|   meter,    |<----|         |<----|   meter,    |<---
          |   action,   |     |         |     |   action,   |
          |   queueing  |     +---------+     |   queueing  |
          +-------------+                     +-------------+

      Figure 2. Traffic Conditioning and Queueing Elements

-    Enqueue the traffic for output in the appropriate queue. The
     scheduling of output from this queue may lead to shaping of the
     traffic or may cause it to be forwarded with some minimum rate or
     maximum latency assurance.

If the network is now built out of N-port routers, the expected behavior
of the network should be identical. Therefore, this model must provide
for essentially the same set of functions at the ingress as on the
egress of a router's interfaces. The one point of difference in the
model between ingress and egress is that all traffic at the egress of an
interface is queued, while traffic at the ingress to an interface is
likely to be queued only for shaping purposes, if at all.  Therefore,
equivalent functional datapath elements may be modelled at both the
ingress to and egress from an interface.

Note that it is not mandatory that each of these functional datapath
elements be implemented at both ingress and egress; equally, the model
allows that multiple sets of these elements may be placed in series
and/or in parallel at ingress or at egress. The arrangement of elements
is dependent on the service requirements on a particular interface on a
particular router. By modelling these elements at both ingress and
egress, it is not implied that they must be implemented in this way in a
specific router. For example, a router may implement all shaping and PHB
queueing at the interface egress or may instead implement it only at the
ingress. Furthermore, the classification needed to map a packet to an
egress queue (if present) need not be implemented at the egress but
might instead be implemented at the ingress, with the packet passed
through the routing core with in-band control information to allow for
egress queue selection.

Specifically, some interfaces will be at the outer "edge" and some will
be towards the "core" of the Diffserv domain. It is to be expected (from
the general principles guiding the motivation of Diffserv) that "edge"
interfaces, or at least the routers that contain them, will contain more
complexity and require more configuration than those in the core.

3.3.  Shaping and Policing

Diffserv nodes may apply shaping, policing and/or marking to traffic
streams that exceed the bounds of their TCS in order to prevent one
traffic stream from seizing more than its share of resources from a
Diffserv network. In this model, Shaping, sometimes considered as a TC
action, is treated as a function of queueing elements - see section 7.
Algorithmic Dropping techniques (e.g. RED) are similarly treated since
these often are closely associated with queues.  Policing is modelled as
either a concatenation of a Meter with an Absolute Dropper or as a
concatenation of an Algorithmic Dropper with a Scheduler. These elements
will discard packets which exceed the TCS.

3.4.  Hierarchical View of the Model

From a device-level configuration management perspective, the following
hierarchy exists:

     At the lowest level considered here are individual functional
     datapath elements, each with their own configuration parameters and
     management counters and flags.

     At the next level, the network administrator manages groupings of
     these functional datapath elements interconnected in a DAG. These
     functional datapath elements are organized in self-contained TCBs
     which are used to implement some desired network policy (see
     Section 8). One or more TCBs may be instantiated at each
     interface's ingress or egress; they may be connected in series
     and/or in parallel configurations on the multiple outputs of a
     preceding TCB.  A TCB can be thought of as a "black box" with one
     input and one or more outputs (in the data path). Each interface
     may have a different TCB configuration and each direction (ingress
     or egress) may too.

     At the topmost level considered here, the network administrator
     manages interfaces. Each interface has ingress and egress
     functionality, with each of these expressed as one or more TCBs.
     This level of the hierarchy is what was illustrated in Figure 2.

Further levels may be built on top of this hierarchy, in particular ones
for aiding in the repetitive configuration tasks likely for routers with
many interfaces: some such "template" tools for Diffserv routers are
outside the scope of this model but are under study by other working
groups within IETF.
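
This hierarchy can also be sketched informally in code. The short Python
fragment below is purely illustrative and not part of the model: the
class names (FunctionalElement, TCB, Interface) and their methods are
invented here only to show how functional datapath elements compose into
TCBs, which in turn are attached to the ingress and egress of an
interface.

      # Illustrative sketch only: the three levels of the hierarchy.
      class FunctionalElement:
          """Lowest level: a datapath element with its own parameters."""
          def __init__(self, name, **parameters):
              self.name = name
              self.parameters = parameters

      class TCB:
          """Middle level: a 'black box' with one input and one or more
          outputs, built from functional datapath elements in a DAG."""
          def __init__(self, name, elements):
              self.name = name
              self.elements = elements      # classifiers, meters, queues, ...

      class Interface:
          """Topmost level: ingress and egress functionality, each
          expressed as one or more TCBs."""
          def __init__(self, name):
              self.name = name
              self.ingress_tcbs = []
              self.egress_tcbs = []

      eth0 = Interface("eth0")
      eth0.ingress_tcbs.append(
          TCB("TCB1", [FunctionalElement("Classifier1"),
                       FunctionalElement("Meter1", AverageRate="120kbps"),
                       FunctionalElement("Queue1")]))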

4.  Classifiers

4.1.  Definition

Classification is performed by a classifier element. Classifiers are 1:N
(fan-out) devices: they take a single traffic stream as input and
generate N logically separate traffic streams as output. Classifiers are
parameterized by filters and output streams. Packets from the input
stream are sorted into various output streams by filters which match the
contents of the packet or possibly match other attributes associated
with the packet.  Various types of classifiers using different filters
are described in the following sections.  Figure 3 illustrates a
classifier, where the outputs connect to succeeding functional datapath
elements.

The simplest possible Classifier element is one that matches all packets
that are applied at its input. In this case, the Classifier element is
just a no-op and may be omitted.

      unclassified              classified
      traffic                   traffic
              +------------+
              |            |--> match Filter1 --> OutputA
      ------->| classifier |--> match Filter2 --> OutputB
              |            |--> no match      --> OutputC
              +------------+

      Figure 3. An Example Classifier
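
The behaviour of such a classifier can be sketched with a few lines of
Python (illustrative only; the names and the dictionary packet
representation are assumptions made for this example, not part of the
model). Filters are tried in turn and the packet is handed to the output
of the first match, or to a default output when nothing matches:

      # Illustrative 1:N classifier in the style of Figure 3.
      class Classifier:
          def __init__(self, filters, default_output):
              self.filters = filters        # list of (match_fn, output)
              self.default_output = default_output

          def classify(self, packet):
              for match, output in self.filters:
                  if match(packet):
                      return output
              return self.default_output    # the "no match" output

      # Two exact-match BA filters, as in the example that follows:
      filter1 = lambda pkt: pkt["dscp"] == 0b101010
      filter2 = lambda pkt: pkt["dscp"] == 0b111111
      classifier = Classifier([(filter1, "OutputA"),
                               (filter2, "OutputB")], "OutputC")

      assert classifier.classify({"dscp": 0b101010}) == "OutputA"
      assert classifier.classify({"dscp": 0b000001}) == "OutputC"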

Note that we allow a Multiplexor (see Section 6.3) before the Classifier
to allow input from multiple traffic streams. For example, if traffic
streams originating from multiple ingress interfaces feed through a
single Classifier then the interface number could be one of the packet
classification keys used by the Classifier. This optimization may be
important for scalability in the management plane. Classifiers may also
be cascaded in sequence to perform more complex lookup operations whilst
still maintaining such scalability.

Another example of a packet attribute could be an integer representing
the BGP community string associated with the packet's best-matching
route. Other contextual information may also be used by a Classifier
e.g. knowledge that a particular interface faces a Diffserv domain or a
legacy IPTOS domain [DSARCH] could be used when determining whether a
DSCP is present or not.

The following classifier separates traffic into one of three output
streams based on three filters:

      Filter Matched        Output Stream
      --------------        -------------
      Filter1                    A
      Filter2                    B
      no match                   C

Where Filter1 and Filter2 are defined to be the following BA filters
([DSARCH], Section 4.2.1):

      Filter        DSCP
      ------       ------
        1           101010
        2           111111
        3           ****** (wildcard)

4.1.1.  Filters

A filter consists of a set of conditions on the component values of a
packet's classification key (the header values, contents, and attributes
relevant for classification). In the BA classifier example above, the
classification key consists of one packet header field, the DSCP, and
both Filter1 and Filter2 specify exact-match conditions on the value of
the DSCP. Filter3 is a wildcard default filter which matches every
packet, but which is only selected in the event that no other more
specific filter matches.

In general there are a set of possible component conditions including
exact, prefix, range, masked and wildcard matches. Note that ranges can
be represented (with less efficiency) as a set of prefixes and that
prefix matches are just a special case of both masked and range matches.

In the case of a MF classifier [DSARCH], the classification key consists
of a number of packet header fields. The filter may specify a different
condition for each key component, as illustrated in the example below
for a IPv4/TCP classifier:

      Filter   IP Src Addr    IP Dest Addr   TCP SrcPort TCP DestPort
      ------   -------------  -------------  -----------  ------------
      Filter4  172.31.8.1/32  172.31.3.X/24       X          5003

In this example, the fourth octet of the destination IPv4 address and
the source TCP port are wildcard or "don't care".

MF classification of fragmented packets is impossible if the filter uses
transport-layer port numbers e.g. TCP port numbers. MTU-discovery is
therefore a prerequisite for proper operation of a Diffserv network that
uses such classifiers.
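
As a non-normative illustration, the Filter4 entry above can be checked
in a few lines of Python; the key layout and helper names here are
assumptions made for this sketch. The masked destination address is
tested as a prefix and the wildcarded fields are simply not examined:

      # Illustrative check of an MF filter like Filter4 against a key.
      import ipaddress

      def in_prefix(address, prefix):
          return ipaddress.ip_address(address) in ipaddress.ip_network(prefix)

      def matches_filter4(key):
          return (in_prefix(key["ip_src"], "172.31.8.1/32") and
                  in_prefix(key["ip_dst"], "172.31.3.0/24") and  # last octet "don't care"
                  key["tcp_dst"] == 5003)                        # TCP src port "don't care"

      key = {"ip_src": "172.31.8.1", "ip_dst": "172.31.3.77",
             "tcp_src": 4321, "tcp_dst": 5003}
      assert matches_filter4(key)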

4.1.2.  Overlapping Filters

Note that it is easy to define sets of overlapping filters in a
classifier. For example:

      Filter5:
      Type:   Masked-DSCP
      Value:  111000
      Mask:   111000

      Filter6:
      Type:   Masked-DSCP
      Value:  000111 (binary)
      Mask:   000111 (binary)

A packet containing DSCP = 111111 cannot be uniquely classified by this
pair of filters and so a precedence must be established between Filter5
and Filter6 in order to break the tie. This precedence must be
established either (a) by a manager which knows that the router can
accomplish this particular ordering e.g. by means of reported
capabilities, or (b) by the router along with a mechanism to report to a
manager which precedence is being used. Such precedence mechanisms must
be supported in any translation of this model into specific syntax for
configuration and management protocols.

As another example, one might want first to disallow certain
applications from using the network at all, or to classify some
individual traffic streams that are not Diffserv-marked. Traffic that is
not classified by those tests might then be inspected for a DSCP. The
word "then" implies sequence and this must be specified by means of
precedence.

An unambiguous classifier requires that every possible classification
key match at least one filter (possibly the wildcard default) and that
any ambiguity between overlapping filters be resolved by precedence.
Therefore, the classifiers on any given interface must be "complete" and
will often include an "everything else" filter as the lowest precedence
element in order for the result of classification to be deterministic.
Note that this completeness is only required of the first classifier
that incoming traffic will meet as it enters an interface - subsequent
classifiers on an interface only need to handle the traffic that it is
known that they will receive.

This model of classifier operation makes the assumption that all filters
of the same precedence be applied simultaneously. Whilst convenient from
a modelling point-of-view, this may or may not be how the classifier is
actually implemented - this assumption is not intended to dictate how
the implementation actually handles this, merely to clearly define the
required end result.
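
The following small sketch (illustrative only) makes the precedence
requirement concrete using Filter5 and Filter6 from the example above:
both filters match a packet carrying DSCP 111111, and the explicit
ordering of the list is what makes the classification deterministic.

      # Illustrative precedence-ordered evaluation of overlapping
      # masked-DSCP filters.
      def masked_dscp_filter(value, mask):
          return lambda dscp: (dscp & mask) == (value & mask)

      filter5 = masked_dscp_filter(0b111000, 0b111000)
      filter6 = masked_dscp_filter(0b000111, 0b000111)

      # Highest precedence first; this order must be fixed by the manager
      # or reported by the router, as discussed above.
      ordered_filters = [("Filter5", filter5), ("Filter6", filter6)]

      def classify(dscp):
          for name, match in ordered_filters:
              if match(dscp):
                  return name
          return "no match"

      assert classify(0b111111) == "Filter5"  # both match; precedence wins
      assert classify(0b000111) == "Filter6"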

4.2.  Examples

4.2.1.  Behaviour Aggregate (BA) Classifier

The simplest Diffserv classifier is a behavior aggregate (BA) classifier
[DSARCH]. A BA classifier uses only the Diffserv codepoint (DSCP) in a
packet's IP header to determine the logical output stream to which the
packet should be directed. We allow only an exact-match condition on
this field because the assigned DSCP values have no structure, and
therefore no subset of DSCP bits are significant.

The following defines a possible BA filter:

      Filter8:
      Type:   BA
      Value:  111000

4.2.2.  Multi-Field (MF) Classifier

Another type of classifier is a multi-field (MF) classifier [DSARCH].
This classifies packets based on one or more fields in the packet
(possibly including the DSCP). A common type of MF classifier is a 6-
tuple classifier that classifies based on six fields from the IP and TCP
or UDP headers (destination address, source address, IP protocol, source
port, destination port, and DSCP). MF classifiers may classify on other
fields such as MAC addresses, VLAN tags, link-layer traffic class fields
or other higher-layer protocol fields.

The following defines a possible MF filter:

      Filter9:
      Type:              IPv4-6-tuple
      IPv4DestAddrValue: 0.0.0.0
      IPv4DestAddrMask:  0.0.0.0
      IPv4SrcAddrValue:  172.31.8.0
      IPv4SrcAddrMask:   255.255.255.0
      IPv4DSCP:          28
      IPv4Protocol:      6
      IPv4DestL4PortMin: 0
      IPv4DestL4PortMax: 65535
      IPv4SrcL4PortMin:  20
      IPv4SrcL4PortMax:  20

A similar type of classifier can be defined for IPv6.

4.2.3.  Free-form Classifier

A Free-form classifier is made up of a set of user definable arbitrary
filters each made up of {bit-field size, offset (from head of packet),
mask}:

      Classifier2:
      Filter12:    OutputA
      Filter13:    OutputB
      Default:     OutputC

      Filter12:
      Type:        FreeForm
      SizeBits:    3 (bits)
      Offset:      16 (bytes)
      Value:       100 (binary)
      Mask:        101 (binary)

      Filter13:
      Type:        FreeForm
      SizeBits:    12 (bits)
      Offset:      16 (bytes)
      Value:       100100000000 (binary)
      Mask:        111111111111 (binary)

Free-form filters can be combined into filter groups to form very
powerful filters.
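
Informally, a free-form filter can be evaluated by extracting the
addressed bit-field and comparing it under the mask, as in the following
sketch (illustrative only; the bit-numbering convention chosen here,
most-significant bit first within the byte at the given offset, is an
assumption of this example):

      # Illustrative evaluation of a {size, offset, value, mask} filter.
      def extract_bits(packet, offset_bytes, size_bits):
          nbytes = (size_bits + 7) // 8
          chunk = int.from_bytes(packet[offset_bytes:offset_bytes + nbytes], "big")
          return chunk >> (nbytes * 8 - size_bits)   # keep leading size_bits

      def freeform_match(packet, offset_bytes, size_bits, value, mask):
          return (extract_bits(packet, offset_bytes, size_bits) & mask) == (value & mask)

      # A packet whose byte at offset 16 begins with the bits 110 matches
      # Filter12 (Value 100, Mask 101): the middle bit is masked out.
      pkt = bytes(16) + bytes([0b11000000]) + bytes(8)
      assert freeform_match(pkt, 16, 3, 0b100, 0b101)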

4.2.4.  Other Possible Classifiers

Classification may also be performed based on information at the
datalink layer below IP (e.g. VLAN or datalink-layer priority) or
perhaps on the ingress or egress IP, logical or physical interface
identifier (e.g. the incoming channel number on a channelized
interface).  A classifier that filters based on IEEE 802.1p Priority and
on 802.1Q VLAN-ID might be represented as:

      Classifier3:
      Filter14 AND Filter15:  OutputA
      Default:                OutputB

      Filter14:                        -- priority 4 or 5
      Type:        Ieee8021pPriority
      Value:       100 (binary)
      Mask:        110 (binary)

      Filter15:                        -- VLAN 2304
      Type:        Ieee8021QVlan
      Value:       100100000000 (binary)
      Mask:        111111111111 (binary)

Such classifiers may be the subject of other standards or may be
proprietary to a router vendor but they are not discussed further here.

5.  Meters

Metering is defined in [DSARCH].  Diffserv network providers may choose
to offer services to customers based on a temporal (i.e., rate) profile
within which the customer submits traffic for the service. In this
event, a meter might be used to trigger real-time traffic conditioning
actions (e.g., marking) by routing a non-conforming packet through an
appropriate next-stage action element. Alternatively, by counting
conforming and/or non-conforming traffic using a Counter element
downstream of the Meter, it might also be used to help in collecting
data for out-of-band management functions such as billing applications.

Meters are logically 1:N (fan-out) devices (although a multiplexor can
be used in front of a meter). Meters are parameterized by a temporal
profile and by conformance levels, each of which is associated with a
meter's output. Each output can be connected to another functional
element.

Note that this model of a meter differs slightly from that described in
[DSARCH]. In that description the meter is not a datapath element but is
instead used to monitor the traffic stream and send control signals to
action elements to dynamically modulate their behavior based on the
conformance of the packet. Figure 4 illustrates a meter with 3 levels of
conformance.

      unmetered              metered
      traffic                traffic
                +---------+
                |         |--------> conformance A
      --------->|  meter  |--------> conformance B
                |         |--------> conformance C
                +---------+

      Figure 4. A Generic Meter

In some Diffserv examples e.g. [AF-PHB], three levels of conformance are
discussed in terms of colors, with green representing conforming, yellow
representing partially conforming and red representing non-conforming.
These different conformance levels may be used to trigger different
queueing, marking or dropping treatment later on in the processing.
Other example meters use a binary notion of conformance; in the general
case N levels of conformance can be supported. In general there is no
constraint on the type of functional datapath element following a meter
output, but care must be taken not to inadvertently configure a datapath
that results in packet reordering that is not consistent with the
requirements of the relevant PHB specification.

A meter, according to this model, measures the rate at which packets
making up a stream of traffic pass it, compares the rate to some set of
thresholds and produces some number (two or more) of potential results:
a given packet is said to be "conformant" to a level of the meter if, at
the time that the packet is being examined, the stream appears to be
within the rate limit for the profile associated with that level. A
fuller discussion of conformance to meter profiles (and the associated
requirements that this places on the schedulers upstream) is provided in
Appendix A.

5.1.  Examples

The following are some examples of possible meters.

5.1.1.  Average Rate Meter

An example of a very simple meter is an average rate meter. This type of
meter measures the average rate at which packets are submitted to it
over a specified averaging time.

An average rate profile may take the following form:

      Meter1:
      Type:                AverageRate
      Profile:             Profile1
      ConformingOutput:    Queue1
      NonConformingOutput: Counter1

      Profile1:
      Type:                AverageRate
      AverageRate:         120 kbps
      Delta:               100 msec

A Meter measuring against this profile would continually maintain a
count that indicates the total number and/or cumulative byte-count of
packets arriving between time T (now) and time T - 100 msecs. So long as
an arriving packet does not push the count over 12 kbits in the last 100
msec then the packet would be deemed conforming. Any packet that pushes
the count over 12 kbits would be deemed non-conforming. Thus, this Meter
deems packets to correspond to one of two conformance levels: conforming
or non-conforming and sends them on for the appropriate subsequent
treatment.
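
A meter of this kind can be sketched as a sliding-window byte counter.
The Python below is illustrative only (the class and parameter names
mirror Profile1 but are otherwise invented); it keeps the sizes of
packets seen in the last Delta seconds and compares the running total
against AverageRate * Delta (12 kbits for Profile1):

      # Illustrative sliding-window average rate meter (cf. Profile1).
      from collections import deque

      class AverageRateMeter:
          def __init__(self, average_rate_bps, delta_seconds):
              self.limit_bits = average_rate_bps * delta_seconds
              self.delta = delta_seconds
              self.window = deque()          # (arrival_time, size_in_bits)
              self.window_bits = 0

          def meter(self, now, size_bytes):
              size_bits = 8 * size_bytes
              while self.window and self.window[0][0] <= now - self.delta:
                  self.window_bits -= self.window.popleft()[1]  # expire old
              self.window.append((now, size_bits))
              self.window_bits += size_bits
              return ("conforming" if self.window_bits <= self.limit_bits
                      else "non-conforming")

      meter1 = AverageRateMeter(average_rate_bps=120_000, delta_seconds=0.100)
      assert meter1.meter(0.000, 1500) == "conforming"      # 12000 bits in window
      assert meter1.meter(0.050, 1500) == "non-conforming"  # 24000 bits in 100 ms
      assert meter1.meter(0.200, 1500) == "conforming"      # old packets aged out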

5.1.2.  Exponential Weighted Moving Average (EWMA) Meter

The EWMA form of Meter is easy to implement in hardware and can be
parameterized as follows:

      avg_rate(t) = (1 - Gain) * avg_rate(t') +  Gain * rate(t)
      t = t' + Delta

For a packet arriving at time t:

      if (avg_rate(t) > AverageRate)
         non-conforming
      else
         conforming

"Gain" controls the time constant (e.g. frequency response) of what is
essentially a simple IIR low-pass filter. "rate(t)" measures the number
of incoming bytes in a small fixed sampling interval, Delta.  Any packet
that arrives and pushes the average rate over a predefined rate
AverageRate is deemed non-conforming. An EWMA Meter profile might look
something like the following:

      Meter2:
      Type:                ExpWeightedMovingAvg
      Profile:             Profile2
      ConformingOutput:    Queue1
      NonConformingOutput: AbsoluteDropper1

      Profile2:
      Type:                ExpWeightedMovingAvg
      AverageRate:         25 kbps
      Delta:               10 usec
      Gain:                1/16
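
The recurrence above can be coded directly. The sketch below is
illustrative only: the choice to count bytes per sampling interval, the
way an arriving packet is tested against the projected average, and the
numeric values used at the end are assumptions of this example, not
requirements of the model.

      # Illustrative EWMA meter: a simple IIR low-pass filter.
      class EWMAMeter:
          def __init__(self, average_rate, gain, delta):
              self.average_rate = average_rate  # threshold, bytes per Delta
              self.gain = gain                  # e.g. 1/16
              self.delta = delta                # sampling interval
              self.avg_rate = 0.0               # avg_rate(t')
              self.interval_bytes = 0           # rate(t): bytes this interval

          def packet_arrival(self, size_bytes):
              self.interval_bytes += size_bytes
              projected = ((1 - self.gain) * self.avg_rate +
                           self.gain * self.interval_bytes)
              return ("non-conforming" if projected > self.average_rate
                      else "conforming")

          def interval_tick(self):
              # t = t' + Delta: fold this interval's bytes into the average
              self.avg_rate = ((1 - self.gain) * self.avg_rate +
                               self.gain * self.interval_bytes)
              self.interval_bytes = 0

      meter2 = EWMAMeter(average_rate=1500, gain=1.0 / 16, delta=0.001)
      print(meter2.packet_arrival(64))          # "conforming"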

5.1.3.  Two-Parameter Token Bucket Meter

A more sophisticated Meter might measure conformance to a token bucket
(TB) profile (see above and Appendix A for discussions of loose and
strict conformance to a token bucket). A TB profile generally has two
parameters, an average token rate and a burst size. TB Meters compare
the arrival rate of packets to the average rate specified by the TB
profile.  Logically, tokens accumulate in a bucket at the average rate,
up to a maximum credit which is the burst size. Packets of length L
bytes are considered conforming if any tokens are available in the
bucket at the time of packet arrival: up to L bytes may then be borrowed
from future token allocations. Packets are allowed to exceed the average
rate in bursts up to the burst size. Packets which arrive to find a
bucket with no tokens in it are deemed non-conforming. A two-parameter
TB meter has exactly two possible conformance levels (conforming, non-
conforming).  Note that "strict" conformance meters are also useful -
see e.g. [SRTCM] and [TRTCM].

A two-parameter TB meter might appear as follows:

      Meter3:
      Type:                SimpleTokenBucket
      Profile:             Profile3
      ConformingOutput:    Queue1
      NonConformingOutput: AbsoluteDropper1

      Profile3:
      Type:                SimpleTokenBucket
      AverageRate:         200 kbps
      BurstSize:           100 kbytes
      ConformanceType:     loose
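
A "loose" two-parameter token bucket of the kind configured above can be
sketched as follows (illustrative only; the continuous-time token fill
and the unit conversions are simplifications made for this example):

      # Illustrative two-parameter token bucket meter, "loose" conformance.
      class TokenBucketMeter:
          def __init__(self, average_rate_Bps, burst_size_bytes):
              self.rate = average_rate_Bps      # token fill, bytes/second
              self.burst = burst_size_bytes     # maximum credit
              self.tokens = float(burst_size_bytes)  # may go negative
              self.last_update = 0.0

          def meter(self, now, packet_size_bytes):
              # accumulate tokens since the last packet, capped at the burst
              self.tokens = min(self.burst,
                                self.tokens + (now - self.last_update) * self.rate)
              self.last_update = now
              if self.tokens > 0:               # "any tokens available"
                  self.tokens -= packet_size_bytes  # borrow from the future
                  return "conforming"
              return "non-conforming"

      # Profile3-like parameters: 200 kbps average rate, 100 kbyte burst.
      meter3 = TokenBucketMeter(average_rate_Bps=25_000, burst_size_bytes=100_000)
      assert meter3.meter(now=0.0, packet_size_bytes=1500) == "conforming"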

5.1.4.  Multi-Stage Token Bucket Meter

More complicated TB meters might define multiple burst sizes and more
conformance levels. Packets found to exceed the larger burst size are
deemed non-conforming. Packets found to exceed the smaller burst size
are deemed partially conforming. Packets exceeding neither are deemed
conforming. Token bucket meters designed for Diffserv networks are
described in more detail in [SRTCM, TRTCM, GTC]; in some of these
references, three levels of conformance are discussed in terms of colors
with green representing conforming, yellow representing partially
conforming and red representing non-conforming. Note that these
multiple-conformance-level meters can sometimes be implemented using an
appropriate sequence of multiple two-parameter TB meters.

A profile for a multi-stage TB meter with three levels of conformance
might look as follows:

      Meter4:
      Type:                TwoRateTokenBucket
      ProfileA:            Profile4
      ConformingOutputA:   Queue1
      ProfileB:            Profile5
      ConformingOutputB:   Marker1
      NonConformingOutput: AbsoluteDropper1

      Profile4:
      Type:                SimpleTokenBucket
      AverageRate:         100 kbps
      BurstSize:           20 kbytes

      Profile5:
      Type:                SimpleTokenBucket
      AverageRate:         100 kbps
      BurstSize:           100 kbytes
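
One way to realize such a multi-stage meter from two-parameter token
buckets is sketched below. This is illustrative only, loosely in the
spirit of [SRTCM] rather than an implementation of it: both buckets fill
at the same average rate, the smaller burst distinguishes fully
conforming traffic and the larger burst separates partially conforming
from non-conforming traffic.

      # Illustrative three-level meter from two token buckets (cf. Meter4).
      class Bucket:
          def __init__(self, rate_Bps, burst_bytes):
              self.rate, self.burst = rate_Bps, burst_bytes
              self.tokens, self.last = float(burst_bytes), 0.0

          def take(self, now, size_bytes):
              self.tokens = min(self.burst,
                                self.tokens + (now - self.last) * self.rate)
              self.last = now
              if self.tokens >= size_bytes:
                  self.tokens -= size_bytes
                  return True
              return False

      class MultiStageMeter:
          def __init__(self, rate_Bps, small_burst, large_burst):
              self.small = Bucket(rate_Bps, small_burst)   # Profile4-like
              self.large = Bucket(rate_Bps, large_burst)   # Profile5-like

          def meter(self, now, size_bytes):
              ok_small = self.small.take(now, size_bytes)
              ok_large = self.large.take(now, size_bytes)
              if not ok_large:
                  return "non-conforming"          # e.g. to AbsoluteDropper1
              if not ok_small:
                  return "partially-conforming"    # e.g. to Marker1
              return "conforming"                  # e.g. to Queue1

      meter4 = MultiStageMeter(rate_Bps=12_500,    # 100 kbps
                               small_burst=20_000, large_burst=100_000)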

5.1.5.  Null Meter

A null meter has only one output: always conforming, and no associated
temporal profile. Such a meter is useful to define in the event that the
configuration or management interface does not have the flexibility to
omit a meter in a datapath segment.

      Meter5:
      Type:                NullMeter
      Output:              Queue1

6.  Action Elements

The classifiers and meters described up to this point are fan-out
elements which are generally used to determine the appropriate action to
apply to a packet. The set of possible actions that can then be applied
include:

-    Marking

-    Absolute Dropping

-    Multiplexing

-    Counting

-    Null action - do nothing

The corresponding action elements are described in the following
sections.

6.1.  DSCP Marker

DSCP Markers are 1:1 elements which arrive to find set a bucket with no tokens codepoint (e.g. the DSCP in it are deemed non-conforming. A two-parameter TB meter has exactly
two possible conformance levels (conforming, non-conforming). TB
implementation details are discussed an
IP header). DSCP Markers may also act on unmarked packets (e.g. those
submitted with DSCP of zero) or may re-mark previously marked packets.
In particular, the model supports the application of marking based on a
preceding classifier match. The mark set in Appendix A. Note that this is a
"lenient" meter that allows some borrowing, as discussed above.

A two-parameter TB meter might appear as follows:

      Meter3:
      Type:                SimpleTokenBucket
      Profile:             Profile3
      ConformingOutput:    Queue1
      NonConformingOutput: AbsoluteDropper1

      Profile3:
      Type:                SimpleTokenBucket
      AverageRate:         200 kbps
      BurstSize:           100 kbytes

5.2.4.  Multi-Stage Token Bucket Meter

More complicated TB meters might define two burst sizes packet will determine its
subsequent PHB treatment in downstream nodes of a network and three
conformance levels. Packets found to exceed the larger burst size possibly
also in subsequent processing stages within this router.

DSCP Markers for Diffserv are
deemed non-conforming. Packets found normally parameterized by a single
parameter: the 6-bit DSCP to exceed be marked in the smaller burst size
are deemed partially conforming. Packets exceeding neither packet header.

      Marker1:
      Type:                DSCPMarker
      Mark:                010010
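
Purely as an illustration (this is not part of the model), the effect of
a DSCP Marker on the DS octet of an IP header might be sketched as
follows; the 6-bit DSCP occupies the six most-significant bits of the
octet and the remaining two bits are left untouched.  The function name
is hypothetical.

   def mark_dscp(ds_octet, dscp):
       # Overwrite the 6-bit DSCP, preserving the low-order 2 bits.
       if not 0 <= dscp <= 0x3F:
           raise ValueError("DSCP is a 6-bit value")
       return (dscp << 2) | (ds_octet & 0x03)

   # Marker1 above is configured with the codepoint 010010:
   new_ds_octet = mark_dscp(0x00, 0b010010)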

6.2.  Absolute Dropper

Absolute Droppers simply discard packets. There are no parameters for
these droppers. Because this Absolute Dropper is a terminating point of
the datapath and has no outputs, it is probably desirable to forward the
packet through a Counter Action first for instrumentation purposes.

      AbsoluteDropper1:
      Type:                AbsoluteDropper

Absolute Droppers are not the only elements that can cause a packet to
be discarded: another element is an Algorithmic Dropper element (see
Section 7.1.3). However, since this element's behavior is closely tied
to the state of one or more queues, we choose to distinguish it as a
separate functional datapath element.

6.3.  Multiplexor

It is occasionally necessary to multiplex traffic streams into a
functional datapath element with a single input. A M:1 (fan-in)
multiplexor is a simple logical device for merging traffic streams. It
is parameterized by its number of incoming ports.

      Mux1:
      Type:                Multiplexor
      Output:              Queue2

6.4.  Counter

One passive action is to account for the fact that a data packet was
processed. The statistics that result might be used later for customer
billing, service verification or network engineering purposes. Counters
are 1:1 functional datapath elements which update a counter by L and a
packet counter by 1 every time an L-byte sized packet passes through
them. Counters can be used to count packets about to be dropped by an
Absolute Dropper or to count packets arriving at or departing from some
other functional element.

      Counter1:
      Type:                Counter
      Output:              Queue1
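
As a trivial sketch only (not part of the model), a Counter element
might maintain packet and octet counts and then pass the packet on to
its configured output; the names below are illustrative and the output
callable stands in for whatever element follows the Counter.

   class Counter:
       # 1:1 element: count the packet, then forward it unchanged.
       def __init__(self, output):
           self.output = output
           self.packets = 0
           self.octets = 0

       def receive(self, packet_length):
           self.packets += 1
           self.octets += packet_length
           self.output(packet_length)

   # e.g. counting packets on their way to a following queue:
   counter1 = Counter(output=lambda length: None)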

6.5.  Null Action

A null action has one input and one output. The element performs no
action on the packet. Such an element is useful to define in the event
that the configuration or management interface does not have the
flexibility to omit an action element in a datapath segment.

      Null1:
      Type:                Null
      Output:              Queue1

7.  Queueing Elements

Queueing elements modulate the transmission of packets belonging to
different traffic streams and determine their ordering, possibly storing
them temporarily or discarding them. Packets are usually stored either
because there is a resource constraint (e.g., available bandwidth) which
prevents immediate forwarding, or because the queueing block is being
used to alter the temporal properties of a traffic stream (i.e.
shaping). Packets are discarded either because of buffering limitations,
because a buffer threshold is exceeded (including when shaping is
performed), as a feedback control signal to reactive control protocols
such as TCP, or because a meter exceeds a configured profile (i.e.
policing).

The queueing elements in this model represent a logical abstraction of a
queueing system, which is used to configure PHB-related parameters.  The
model can be used to represent a broad variety of possible
implementations. However, it need not necessarily map one-to-one with
physical queueing systems in a specific router implementation.
Implementors should map the configurable parameters of the
implementation's queueing systems to these queueing element parameters
as appropriate to achieve equivalent behaviors.

7.1.  Queueing Model

Queueing is a function which lends itself to innovation. It must be
modelled to allow a broad range of possible implementations to be
represented using common structures and parameters. This model uses
functional decomposition as a tool to permit the needed latitude.

Queueing systems perform three distinct, but related, functions:  they
store packets, they modulate the departure of packets belonging to
various traffic streams and they selectively discard packets. This model
decomposes the queueing block into the component elements that perform each of
these functions: Queues, Schedulers and Algorithmic Droppers,
respectively.  These elements may be connected together as part of a
TCB, as described in section 8.

The remainder of this section discusses FIFO Queues: typically, the
Queue element of this model will be implemented as a FIFO data
structure. However, this does not preclude implementations which are not
strictly FIFO, in that they also support operations that remove or
examine packets (e.g., for use by discarders) other than at the head or
tail. However, such operations MUST NOT have the effect of reordering
packets belonging to the same microflow.

Note that the term FIFO has multiple different common usages: it is
sometimes taken to mean, among other things, a data structure that
permits items to be removed only in the order in which they were
inserted or a service discipline which is non-reordering.

7.1.1.  FIFO Queue

In this model, a FIFO Queue element is a data structure which at any
time may contain zero or more packets. It may have one or more
thresholds associated with it. A FIFO has one or more inputs and exactly
one output. It must support an enqueue operation to add a packet to the
tail of the queue and a dequeue operation to remove a packet from the
head of the queue. Packets must be dequeued in the order in which they
were enqueued. A FIFO has a current depth, which indicates the number of
packets and/or bytes that it contains at a particular time. FIFOs in
this model are modelled without inherent limits on their depth -
obviously this does not reflect the reality of implementations: FIFO
size limits are modelled here by an algorithmic dropper associated with
the FIFO, typically at its input. It is quite likely that every FIFO
will be preceded by an algorithmic dropper.  One exception might be the
case where the packet stream has already been policed to a profile that
can never exceed the scheduler bandwidth available at the FIFO's output
- this would not need an algorithmic dropper at the input to the FIFO.

This representation of a FIFO allows for one common type of depth limit,
one that results from a FIFO supplied from a limited pool of buffers,
shared between multiple FIFOs.

In an implementation, packets are presumably stored in one or more
buffers. Buffers are allocated from one or more free buffer pools. If
there are multiple instances of a FIFO, their packet buffers may or may
not be allocated out of the same free buffer pool. Free buffer pools may
also have one or more thresholds associated with them, which may affect
discarding and/or scheduling. Other than this, buffering mechanisms are
implementation specific and not part of this model.

A FIFO might be represented using the following parameters:

     Queue1:
     Type:       FIFO
     Output:     Scheduler1

Note that a FIFO must provide triggers and/or current state information
to other elements upstream and downstream from it: in particular, it is
likely that the current depth will need to be used by Algorithmic
Dropper elements placed before or after the FIFO. It will also likely
need to provide an implicit "I have packets for you" signal to
downstream Scheduler elements.
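
The following Python sketch is illustrative only: it shows a FIFO Queue
element supporting the enqueue and dequeue operations and exposing a
current depth, which is the kind of state an associated Algorithmic
Dropper or Scheduler might consult.  The names are hypothetical and no
depth limit is modelled, per the discussion above.

   from collections import deque

   class FifoQueue:
       def __init__(self):
           self.packets = deque()
           self.depth = 0              # current depth in bytes

       def enqueue(self, packet_length):
           self.packets.append(packet_length)
           self.depth += packet_length

       def dequeue(self):
           if not self.packets:        # nothing to offer the Scheduler
               return None
           packet_length = self.packets.popleft()
           self.depth -= packet_length
           return packet_length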

7.1.2.  Scheduler

A scheduler is an element which gates the departure of each packet that
arrives at one of its inputs, based on a service discipline. It has one
or more inputs and exactly one output. Each input has an upstream
element to which it is connected, and a set of parameters that affects
the scheduling of packets received at that input.

The service discipline (also known as a scheduling algorithm) is an
algorithm which might take any of the following as its input(s):

a)   static parameters such as relative priority associated with each of
     the scheduler's inputs.

b)   absolute token bucket parameters for maximum or minimum rates
     associated with each of the scheduler's inputs.

c)   parameters, such as packet length or DSCP, associated with the
     packet currently present at its input.

d)   absolute time and/or local state.

Possible service disciplines fall into a number of categories, including
(but not limited to) first come, first served (FCFS), strict priority,
weighted fair bandwidth sharing (e.g. WFQ), rate-limited strict priority
and rate-based. Service disciplines can be further distinguished by
whether they are work-conserving or non-work-conserving (see Glossary).
Non-work-conserving schedulers can be used to shape traffic streams to
match some profile by delaying packets that might be deemed non-
conforming by some downstream node: a packet is delayed until such time
as it would conform to a downstream meter using the same profile.

[DSARCH] defines PHBs without specifying required scheduling algorithms.
However, PHBs such as  the class selectors [DSFIELD], EF [EF-PHB] and AF
[AF-PHB] have descriptions or configuration parameters which strongly
suggest the sort of scheduling discipline needed to implement them. This
document discusses a minimal set of queue parameters to enable
realization of these PHBs. It does not attempt to specify an all-
embracing set of parameters to cover all possible implementation models.
A minimal set includes:

a)   a minimum service rate profile which allows rate guarantees for
     each traffic stream as required by EF and AF without specifying the
     details of how excess bandwidth between these traffic streams is
     shared. Additional parameters to control this behavior should be
     made available, but are dependent on the particular scheduling
     algorithm implemented.

b)   a service priority, used only after the minimum rate profiles of
     all inputs have been satisfied, to decide how to allocate any
     remaining bandwidth.

c)   a maximum service rate profile, for use only with a non-work-
     conserving service discipline.

Any one of these profiles is composed, for the purposes of this model,
of both a rate (in suitable units of bits, bytes or larger chunks in
some unit of time) and a burst size, as discussed further in Appendix A.

By way of example, for an implementation of the EF PHB using a strict
priority scheduling algorithm that assumes that the aggregate EF rate
has been appropriately bounded by upstream policing to avoid starvation
of other BAs, the service rate profiles are not used: the minimum
service rate profile would be defaulted to zero and the maximum service
rate profile would effectively be the "line rate".  Such an
implementation, with multiple priority classes, could also be used for
the Diffserv class selectors [DSFIELD].

Alternatively, setting the service priority values for each input to the
scheduler to the same value enables the scheduler to satisfy the minimum
service rates for each input, so long as the sum of all minimum service
rates is less than or equal to the line rate.

For example, a non-work-conserving scheduler, allocating spare bandwidth
equally between all its inputs, might be represented using the following
parameters:

     Scheduler1:
     Type:           Scheduler2Input

     Input1:
     MaxRateProfile: Profile1
     MinRateProfile: Profile2
     Priority:       none

     Input2:
     MaxRateProfile: Profile3
     MinRateProfile: Profile4
     Priority:       none

A work-conserving scheduler might be represented using the following
parameters:

     Scheduler2:
     Type:           Scheduler3Input

     Input1:
     MaxRateProfile: WorkConserving
     MinRateProfile: Profile5
     Priority:       1

     Input2:
     MaxRateProfile: WorkConserving
     MinRateProfile: Profile6
     Priority:       2

     Input3:
     MaxRateProfile: WorkConserving
     MinRateProfile: none
     Priority:       3
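
As an informal sketch only (one of many possible disciplines, not a
normative algorithm), a work-conserving scheduler in the spirit of
Scheduler2 above might simply serve the backlogged input with the
highest service priority, assuming that larger Priority values indicate
more important inputs; minimum and maximum rate accounting are omitted.
The function below is hypothetical and assumes each input's queue
supports len().

   def pick_next(inputs):
       # inputs: iterable of (priority, queue) pairs; larger priority wins.
       backlogged = [(priority, queue) for priority, queue in inputs
                     if len(queue) > 0]
       if not backlogged:
           return None                 # nothing to send; stay idle
       return max(backlogged, key=lambda entry: entry[0])[1]

   # e.g. pick_next([(1, queue_a), (2, queue_b), (3, queue_c)])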

7.1.3.  Algorithmic Dropper

An Algorithmic Dropper is an element which selectively discards packets
that arrive at its input, based on a discarding algorithm. It has one
data input and one output.  In this model (but not necessarily in a real
implementation), a packet enters the dropper at its input and either its
buffer is returned to a free buffer pool or the packet exits the dropper
at the output.

Alternatively, an Algorithmic Dropper can be thought of as invoking
operations on a FIFO which selectively remove a packet and return its
buffer to the free buffer pool based on a discarding algorithm. In this
case, the operation could be modelled as being a side-effect on the FIFO
upon which it operated, rather than as having a discrete input and
output.  This treatment is equivalent to the one described in the
previous paragraph for this model.

The Algorithmic Dropper is modelled as having a single input.  It is
possible that packets which were classified differently by a Classifier
in this TCB will end up passing through the same dropper. The dropper's
algorithm may need to apply different calculations based on
characteristics of the incoming packet e.g. its DSCP. So there is a
need, in implementations of this model, to be able to relate information
about which classifier element was matched by a packet from a Classifier
to an Algorithmic Dropper.  In the rare cases where this is required,
the chosen model is to insert another Classifier element at this point
in the flow and for it to feed into multiple Algorithmic Dropper
elements, each one implementing a drop calculation that is independent
of any classification keys of the packet: this will likely require the
creation of a new TCB to contain the Classifier and the Algorithmic
Dropper elements.

     NOTE: There are many other formulations of a model that could
     represent this
     linkage that are different to the one described
     above: one formulation would have been to have a pointer from one
     of the drop probability calculation algorithms inside the dropper
     to the original Classifier element that selects this algorithm.
     Another way would have been to have multiple "inputs" to the
     Algorithmic Dropper element fed from the preceding elements,
     leading eventually back to the Classifier elements that matched the
     packet. Yet another formulation might have been for the Classifier
     to (logically) include some sort of "classification identifier"
     along with the packet along its path, for use by any subsequent
     element. And yet another could have been to include a classifier
     inside the dropper, in order for it to pick out the drop algorithm
     to be applied. These other approaches could be used by
     implementations but were deemed to be less clear than the approach
     taken here.

An Algorithmic Dropper, illustrated in Figure 5, has one or more
triggers that cause it to make a decision whether or not to drop one (or
possibly more than one) packet. A trigger may be internal (the arrival
of a packet at the input to the dropper) or it may be external
(resulting from one or more state changes at another element, such as a
FIFO depth crossing a threshold or a scheduling event). It is likely
that an instantaneous FIFO depth will need to be smoothed over some
averaging interval. Some dropping algorithms may require several trigger
inputs feeding back from events elsewhere in the system e.g. depth-
smoothing functions that calculate averages over more than one time
interval.  Smoothing functions are outside the scope of this document
and are not modelled here; we merely indicate where they might be added
in the model.

A trigger may be a boolean combination of events (e.g. a FIFO depth
exceeding a threshold OR a buffer pool depth falling below a threshold).

The dropping algorithm makes a decision on whether to forward or to
discard a packet and, if discarding, whether to discard it from the
head, tail or other part of the associated queue.  It takes as its
parameters some set of dynamic parameters (e.g. smoothed or
instantaneous FIFO depth), some set of static parameters (e.g.
thresholds), and possibly other parameters associated with the packet.
It may also have internal state and is likely to keep counters regarding
the dropped packets (there is no appropriate place here to include a
Counter Action element).  Note that, although an Algorithmic Dropper may
require knowledge of data fields in a packet, as discovered by a
Classifier in the same TCB, it may not modify the packet (i.e. it is not
a marker).

                 +--------------------------------------+
                 | +------------+        +-----------+  |Algorithmic
                 | | smoothing  |    n   |trigger &  |  |Dropper
                 | | function(s)|---/--->|discard    |  |
                 | | (optional) |        |calc.      |  |
                 | +------------+        +-----------+  |
                 |            ^     TailDrop| |HeadDrop |
                 +------------|-------------|-|---------+
                              |             | |
                          +---|-------------+ |
                          |   |               |
                          v   |Depth          v
    Input                ----------------------+     Output
  -----------------------------> |x|x|x|x|x|x|x|------------------->
                         ----------------------+
                           FIFO     |
                                    |
                                  | | |
                                  | v | bit-bucket
                                  +---+

      Figure 5. Algorithmic Dropper + Queue

RED, RED-on-In-and-Out (RIO) and Drop-on-threshold are examples of
dropping algorithms. Tail-dropping and head-dropping are effected by the
location of the dropper relative to the FIFO.

For example, a dropper using a RIO algorithm might be represented using
2 Algorithmic Droppers with the following parameters:

      AlgorithmicDropper1: (for in-profile traffic)
      Type:                   AlgorithmicDropper
      Discipline:             RED, discard from tail
      Trigger:                Internal
      Output:                 Fifo1
      MinThresh:              Fifo1.Depth > 20 kbyte
      MaxThresh:              Fifo1.Depth > 30 kbyte
      SampleWeight            .002
      MaxDropProb             1%

      AlgorithmicDropper2: (for out-of-profile traffic)
      Type:                   AlgorithmicDropper
      Discipline:             RED, discard from tail
      Trigger:                Internal
      Output:                 Fifo1
      MinThresh:              Fifo1.Depth > 10 kbyte
      MaxThresh:              Fifo1.Depth > 20 kbyte
      SampleWeight            .002
      MaxDropProb             2%
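
By way of illustration only, the drop decision of the RED-based droppers
above might be sketched as follows.  MinThresh, MaxThresh, SampleWeight
and MaxDropProb correspond to the parameters in the examples, while the
smoothing and probability calculations follow the common textbook RED
description rather than any particular implementation; the function
names are hypothetical.

   import random

   def smooth(average_depth, instantaneous_depth, sample_weight=0.002):
       # Exponentially weighted moving average of the FIFO depth.
       return ((1 - sample_weight) * average_depth
               + sample_weight * instantaneous_depth)

   def should_drop(average_depth, min_thresh, max_thresh, max_drop_prob):
       if average_depth < min_thresh:
           return False
       if average_depth >= max_thresh:
           return True
       drop_probability = (max_drop_prob
                           * (average_depth - min_thresh)
                           / (max_thresh - min_thresh))
       return random.random() < drop_probability

   # e.g. AlgorithmicDropper2: 10/20 kbyte thresholds, 2% max drop probability
   drop = should_drop(15000, 10000, 20000, 0.02)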

Another form of Algorithmic Dropper, a threshold-dropper, might be
represented using the following parameters:

      AlgorithmicDropper3:
      Type:                   AlgorithmicDropper
      Discipline:             Drop-on-threshold, discard from tail
      Trigger:                Fifo2.Depth > 20 kbyte
      Output:                 Fifo1

Yet another Algorithmic Dropper which drops all out-of-profile packets
whenever the FIFO depth exceeds a certain threshold (this Algorithmic
Dropper is not part of the larger TCB example) might be represented with
the following parameters:

      AlgorithmicDropper3:
      Type:                   AlgorithmicDropper2Input
      Discipline:             Drop-out-packets-on-threshold
      Output:                 Fifo3

      InputA: (in profile)
      Trigger:                none
      InputB: (out of profile)
      Trigger:                Fifo3.Depth > 100 kbyte

7.2.  Sharing load among traffic streams using queueing

Queues are used, in Differentiated Services, for a number of purposes.
In essence, they are simply places to store traffic until it is
transmitted.  However, when several queues are used together in a
queueing system, they can also achieve effects beyond that for given
traffic streams. They can be used to limit variation in delay or impose
a maximum rate (shaping), to permit several streams to share a link in a
semi-predictable fashion (load sharing), or to move variation in delay
from some streams to other streams.

Traffic shaping is often used to condition traffic such that packets
arriving in a burst will be "smoothed" and deemed conforming by
subsequent downstream meters in this or other nodes. Shaping may also be
used to isolate certain traffic streams from the effects of other
traffic streams of the same BA.

In [DSARCH] a shaper is described as a queueing element controlled by a meter which
defines its temporal profile. However, this representation of a shaper
differs substantially from typical shaper implementations.

In the model described here, a shaper is realized by using a non-work-
conserving Scheduler. Some implementations may elect to have queues
whose sole purpose is shaping, while others may integrate the shaping
function with other buffering, discarding and scheduling associated with
access to a resource. Shapers operate by delaying the departure of
packets that would be deemed non-conforming by a meter configured to the
shaper's maximum service rate profile. The packet is scheduled to depart
no sooner than such time that it would become conforming.
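
As a sketch only, the rule that a shaped packet departs no sooner than
it would become conforming can be read as computing an earliest
departure time from the token-bucket state of the shaping profile.  The
function below is illustrative and the parameter names are hypothetical.

   def earliest_departure(now, tokens, rate, packet_length):
       # 'tokens' is the current bucket occupancy in bytes and 'rate' the
       # profile's token accumulation rate in bytes per second.
       deficit = packet_length - tokens
       if deficit <= 0:
           return now                  # already conforming: no shaping delay
       return now + deficit / rate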

7.2.1.  Load Sharing

Load sharing is the traditional use of queues. It was theoretically
explored in a paper by Floyd [FLOYD] in 1993, but has been in use in
communications systems since the 1970's.

[DSARCH] discusses load sharing as dividing an interface among traffic
classes predictably or applying a minimum rate to each of a set of
traffic classes, which might be measured as an absolute lower bound on
the rate a traffic stream achieves or a fraction of the rate an
interface offers. It is generally implemented as some form of weighted
round robin queueing algorithm among a set of FIFO queues, i.e. a WFQ
scheme. This has interesting side-effects.
has interesting side-effects.

A key effect sought is to ensure that the mean rate the traffic in a
stream experiences is never lower than some threshold when there is at
least that much traffic to send. When there is less traffic than this,
the queue tends to be starved of traffic, meaning that the queuing
system will not delay its traffic by very much. When there is
significantly more traffic and the queue starts filling, packets in this
class will be delayed significantly more than traffic in other classes
that are under-using their available capacity. This form of queuing
system therefore tends to move delay and variation in delay from under-
used classes of traffic to heavier users, as well as managing the rates
of the traffic streams.

A side-effect of a WRR or WFQ implementation is that between any two
packets in a given traffic class, the scheduler may emit one or more
packets from each of the other classes in the queuing system. In cases
where average behavior is in view, this is perfectly acceptable. In
cases where traffic is very intolerant of jitter and there are a number
of competing classes, this may have undesirable consequences.

7.2.2.  Traffic Priority

Traffic Prioritization is a special case of load sharing, wherein a
certain traffic class is deemed so jitter-intolerant that if it has
traffic present, that traffic must be sent at the earliest possible
time. By extension, several priorities might be defined, such that
traffic in each of several classes is given preferential service over
any traffic of a lower class. It is the obvious implementation of IP
Precedence as described in [RFC 791], of 802.1p traffic classes
[802.1D] and other similar technologies.

Priority is often abused in real networks; people tend to think that
traffic which has a high business priority deserves this treatment and
talk more about the business imperatives than the actual application
requirements. This can have severe consequences; networks have been
configured which placed business-critical traffic at a higher priority
than routing-protocol traffic, resulting in congestive collapse of the
network's management or control systems. However, it may have a
legitimate use for services based on an Expedited Forwarding (EF) PHB,
where it is certain, thanks to policing at all possible traffic entry
points, that a traffic stream does not abuse its rate and that the
application is indeed jitter-intolerant enough to merit this type of
handling.  Note that, even in cases with well-policed ingress points,
there is still the possibility of unexpected traffic loops within an un-
policed core part of the network causing such collapse.

8.  Traffic Conditioning Blocks (TCBs)

The Classifier, Meter, Action, Algorithmic Dropper, Queue and Scheduler
functional datapath elements described above can be combined into
Traffic Conditioning Blocks (TCBs). A TCB is an abstraction of a set of
functional datapath elements that may be used to facilitate the
definition of specific traffic conditioning functionality e.g. it might
be likened to a template which can be replicated multiple times for
different traffic streams or different customers. It has no likely
physical representation in the implementation of the data path: it is
invented purely as an abstraction for use by management tools.

This model describes the configuration and management of a Diffserv
interface in terms of a TCB that contains, by definition, zero or more
Classifier, Meter, Action, Algorithmic Dropper, Queue and Scheduler
elements.  These elements are arranged arbitrarily according to the
policy being expressed, but always in the order here. Traffic may be
classified; classified traffic may be metered; each stream of traffic
identified by a combination of classifiers and meters may have some set
of actions performed on it, followed by drop algorithms; packets of the
traffic stream may ultimately be stored into a queue and then be
scheduled out to the next TCB or physical interface.  It is permissible
to omit elements or include null elements of any type, or to concatenate
multiple functional datapath elements of the same type.

When the Diffserv treatment for a given packet needs to have such
building blocks repeated, this is performed by cascading multiple TCBs:
an output of one TCB may drive the input of a succeeding one. For
example, consider the case where traffic of a set of classes is shaped
to a set of rates, but the total output rate of the group of classes
must also be limited to a rate. One might imagine a set of network news
feeds, each with a certain maximum rate, and a policy that their
aggregate may not exceed some figure. This may be simply accomplished by
cascading two TCBs. The first classifies the traffic into its separate
feeds and queues each feed separately. The feeds (or a subset of them)
are now fed into a second TCB, which places all input (these news feeds)
into a single queue with a certain maximum rate. In implementation, one
could imagine this as the several literal queues, a CBQ or WFQ system
with an appropriate (and complex) weighting scheme, or a number of other
approaches. But they would have the same externally measurable effect on
the traffic as if they had been literally implemented with separate
TCBs.

8.1.  TCB

A generalised TCB might consist of the following stages:
  - Classification stage
  - Metering stage
  - Action stage (involving Markers, Absolute Droppers,
      Counters and Multiplexors)
  - Queueing stage (involving Algorithmic Droppers, Queues
      and Schedulers)

where each stage may consist of a set of parallel datapaths consisting
of pipelined elements.

A Classifier or a Meter is typically a 1:N element, an Action,
Algorithmic Dropper or Queue is typically a 1:1 element and a Scheduler
is a N:1 element. A complete TCB should, however, result in a 1:1 or 1:N
abstract element. Note that the fan-in or fan-out of an element is not
an important defining characteristic of this taxonomy.

8.1.1.  Building blocks for Queueing

Some particular rules are applied to the ordering of elements within a
Queueing stage within a TCB: elements of the same type may appear more
than once, either in parallel or in series. Typically, a queueing stage
will have relatively many elements in parallel and few in series.
Iteration and recursion are not supported constructs (the elements are
arranged in an acyclic graph). The following inter-connections of
elements are allowed:

1)   The input of a Queue may be the input of the queueing block or it
     may be connected to the output of an Algorithmic Dropper or to an
     output of a Scheduler.

2)   Each input of a Scheduler may be connected to the output of a
     Queue, to the output of an Algorithmic Dropper or to the output of
     another Scheduler.

3)   The input of an Algorithmic Dropper must be the first element of
     the queueing stage or the output of another Algorithmic Dropper.

4)   The output of the queueing block may be the output of a Queue, an
     Algorithmic Dropper or a Scheduler.

Note, in particular, that Schedulers may operate in series such that a
packet at the head of a Queue feeding the concatenated Schedulers is
serviced only after all of the scheduling criteria are met. For example,
a Queue which carries EF traffic streams may be served first by a non-
work-conserving Scheduler to shape the stream to a maximum rate, then by
a work-conserving Scheduler to mix EF traffic streams with other traffic
streams. Alternatively, there might be a Queue and/or a dropper between
the two Schedulers.
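
Purely as an illustration of the acyclic-graph constraint above (and not
part of the model), a management tool might represent a queueing stage
as a table of element outputs and check it for cycles; the element names
and the function below are hypothetical.

   # Hypothetical table: element name -> list of elements fed by its outputs.
   outputs = {
       "AlgDropper1": ["Queue2"],
       "Queue2":      ["Scheduler1"],
       "Scheduler1":  [],              # output of the queueing block
   }

   def is_acyclic(outputs):
       # Depth-first search for cycles over the datapath graph.
       visiting, done = set(), set()

       def visit(name):
           if name in done:
               return True
           if name in visiting:
               return False            # found a cycle
           visiting.add(name)
           for successor in outputs.get(name, []):
               if not visit(successor):
                   return False
           visiting.discard(name)
           done.add(name)
           return True

       return all(visit(name) for name in outputs)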

8.2.  An Example TCB

A SLS is presumed to have been negotiated between the customer and the
provider which specifies the handling of the customer's traffic, as
defined by a TCS, by the provider's network. The agreement might be of
the following form:

   DSCP     PHB   Profile     Treatment
   ----     ---   -------     ----------------------
   001001   EF    Profile4    Discard non-conforming.
   001100   AF11  Profile5    Shape to profile, tail-drop when full.
   001101   AF21  Profile3    Re-mark non-conforming to DSCP 001000,
                                 tail-drop when full.
   other    BE    none        Apply RED-like dropping.

This SLS specifies that the customer may submit packets marked for DSCP
001001 which will get EF treatment so long as they remain conforming to
Profile4 and will be discarded if they exceed this profile. The
discarded packets are counted in this example, perhaps for use by the
provider's sales department in convincing the customer to buy a larger
SLS.  Packets marked for DSCP 001100 will be shaped to Profile5 before
forwarding. Packets marked for DSCP 001101 will be metered to Profile3
with non-conforming packets "downgraded" by being re-marked with a DSCP
of 001000.  It is implicit in this agreement that conforming packets are
given the PHB originally indicated by the packets' DSCP field.

Figures 6 and 7 illustrate a TCB that might be used to handle this SLS
at an ingress interface at the customer/provider boundary.

The Classification stage of this example consists of a single BA
classifier. The BA classifier is used to separate traffic based on the
Diffserv service level requested by the customer (as indicated by the
DSCP in each submitted packet's IP header). We illustrate three DSCP
filter values: A, B and C. The 'X' in the BA classifier is a wildcard
filter that matches every packet not otherwise matched.

The path for DSCP 001100 proceeds directly to Dropper1 whilst the paths
for DSCP 001001 and 001101 include a metering stage. All other traffic
is passed directly on to Dropper3. There is a separate meter for each
set of packets corresponding to classifier outputs A and C. Each meter
uses a specific profile, as specified in the TCS, for the corresponding
Diffserv service level. The meters in this example each indicate one of
two conformance levels: conforming or non-conforming.

Following the Metering stage is an Action stage in some of the branches.
Packets submitted for DSCP 001001 (Classifier output A) that are deemed
non-conforming by Meter1 are counted and discarded while packets that
are conforming are passed on to Queue1. Packets submitted for DSCP
001101 (Classifier output C) that are deemed non-conforming by Meter2
are re-marked and then both conforming and non-conforming packets are
multiplexed together before being passed on to Dropper2/Queue3.

                          +-----+
                          |    A|---------------------------> to Queue1
                       +->|     |
                       |  |    B|--+  +-----+    +-----+
                       |  +-----+  |  |     |    |     |
                       |  Meter1   +->|     |--->|     |
                       |              |     |    |     |
                       |              +-----+    +-----+
                       |              Counter1   Absolute
 submitted +-----+     |                         Dropper1
 traffic   |    A|-----+
 --------->|    B|--------------------------------------> to AlgDropper1
           |    C|-----+
           |    X|--+  |
           +-----+  |  |  +-----+                +-----+
         Classifier1|  |  |    A|--------------->|A    |
            (BA)    |  +->|     |                |     |--> to AlgDrop2
                    |     |    B|--+  +-----+ +->|B    |
                    |     +-----+  |  |     | |  +-----+
                    |     Meter2   +->|     |-+    Mux1
                    |                 |     |
                    |                 +-----+
                    |                 Marker1
                    +-----------------------------------> to AlgDropper3

      Figure 6:  An Example Traffic Conditioning Block (Part 1)

The Algorithmic Dropping, Queueing and Scheduling stages are realised as
follows, illustrated in figure 7. Note that the figure does not show any
of the implicit control linkages between elements that allow e.g. an
Algorithmic Dropper to sense the current state of a succeeding Queue.
Conforming DSCP 001001 packets from Meter1 are passed directly to
Queue1: there is no way, with a configuration of the following Scheduler
that matches the metering, for these packets to overflow the depth of
Queue1 so there is no requirement for dropping at this point.  Packets
marked for DSCP 001100 must be passed through a tail-dropper,
AlgDropper1, which serves to limit the depth of the following queue,
Queue2: packets that arrive to a full queue will be discarded. This is
likely to be an error case: the customer is obviously not sticking to
its agreed profile.  Similarly, all packets from the original DSCP
001101 stream (some may have been re-marked by this stage) are passed to
AlgDropper2 and Queue3.  Packets marked for all other DSCPs are passed
to AlgDropper3 which is a RED-like Algorithmic Dropper: based on
feedback of the current depth of Queue4, this dropper is supposed to
discard enough packets from its input stream to keep the queue depth
under control.

These four Queue elements are then serviced by a Scheduler element
Scheduler1: this must be configured to give each of its inputs an
appropriate priority and/or bandwidth share. Inputs A and C are given
guarantees of bandwidth, as appropriate for the contracted profiles.
Input B is given a limit on the bandwidth it can use i.e. a non-work-
conserving discipline in order to achieve the desired shaping of this
stream.  Input D is given no limits or guarantees but a lower priority
than the other queues, appropriate for its best-effort status.  Traffic
then exits the Scheduler in a single orderly stream.

The interconnections of the TCB elements illustrated in Figures 6 and 7
can be represented textually as follows:

      TCB1:

      Classifier1:
      FilterA:             Meter1
      FilterB:             Dropper1
      FilterC:             Meter2
      Default:             Dropper3

      Meter1:
      Type:                AverageRate
      Profile:             Profile4
      ConformingOutput:    Queue1
      NonConformingOutput: Counter1

      Counter1:
      Output:              AbsoluteDropper1

    from Meter1                     +-----+
    ------------------------------->|     |----+
                                    |     |    |
                                    +-----+    |
                                    Queue1     |
                                               |  +-----+
    from Classifier1 +-----+        +-----+    +->|A    |
    ---------------->|     |------->|     |------>|B    |------->
                     |     |        |     |  +--->|C    |  exiting
                     +-----+        +-----+  | +->|D    |  traffic
                     AlgDropper1    Queue2   | |  +-----+
                                             | |  Scheduler1
    from Mux1        +-----+        +-----+  | |
    ---------------->|     |------->|     |--+ |
                     |     |        |     |    |
                     +-----+        +-----+    |
                     AlgDropper2    Queue3     |
                                               |
    from Classifier1 +-----+        +-----+    |
    ---------------->|     |------->|     |----+
                     |     |        |     |
                     +-----+        +-----+
                     AlgDropper3    Queue4

      Figure 7: An Example Traffic Conditioning Block (Part 2)

      Meter2:
      Type:                AverageRate
      Profile:             Profile3
      ConformingOutput:    Mux1.InputA
      NonConformingOutput: Marker1

      Marker1:
      Type:                DSCPMarker
      Mark:                001000
      Output:              Mux1.InputB

      Mux1:
      Output:              Dropper2

      AlgDropper1:
      Type:                AlgorithmicDropper
      Discipline:          Drop-on-threshold
      Trigger:             Queue2.Depth > 10kbyte
      Output:              Queue2

      AlgDropper2:
      Type:                AlgorithmicDropper
      Discipline:          Drop-on-threshold
      Trigger:             Queue3.Depth > 20kbyte
      Output:              Queue3

      AlgDropper3:
      Type:                AlgorithmicDropper
      Discipline:          RED93
      Trigger:             Internal
      Output:              Queue4
      MinThresh:           Queue4.Depth > 20 kbyte
      MaxThresh:           Queue4.Depth > 40 kbyte
         <other RED parms too>

      Queue1:
      Type:                FIFO
      Output:              Scheduler1.InputA

      Queue2:
      Type:                FIFO
      Output:              Scheduler1.InputB

      Queue3:
      Type:                FIFO
      Output:              Scheduler1.InputC

      Queue4:
      Type:                FIFO
      Output:              Scheduler1.InputD

      Scheduler1:
      Type:                Scheduler4Input
      InputA:
      MaxRateProfile:      none
      MinRateProfile:      Profile4
      Priority:            20
      InputB:
      MaxRateProfile:      Profile5
      MinRateProfile:      none
      Priority:            40
      InputC:
      MaxRateProfile:      none
      MinRateProfile:      Profile3
      Priority:            20
      InputD:
      MaxRateProfile:      none
      MinRateProfile:      none
      Priority:            10

8.3.  An Example TCB to Support Multiple Customers

The TCB described above can be installed on an ingress interface to
implement a provider/customer TCS if the interface is dedicated to the
customer. However, if a single interface is shared between multiple
customers, then the TCB above will not suffice, since it does not
differentiate among traffic from different customers. Its classification
stage uses only BA classifiers.

The configuration is readily modified to support the case of multiple
customers per interface, as follows. First, a TCB is defined for each
customer to reflect the TCS with that customer: TCB1, defined above, is
the TCB for customer 1. Similar elements are created for TCB2 and for
TCB3 which reflect the agreements with customers 2 and 3 respectively.
These 3 TCBs may or may not contain similar elements and parameters.

Finally, a classifier is added to the front end to separate the traffic
from the three different customers. This forms a new TCB, TCB4, which is
illustrated in Figure 8.

A representation of this multi-customer TCB might be:

      TCB4:

      Classifier4:

      Filter1:     to TCB1
      Filter2:     to TCB2
      Filter3:     to TCB3
      No Match:    AbsoluteDropper4

      AbsoluteDropper4:
      Type:                AbsoluteDropper

      TCB1:
      (as defined above)

      TCB2:
      (similar to TCB1, perhaps with different
       elements or numeric parameters)

      TCB3:
      (similar to TCB1, perhaps with different
       elements or numeric parameters)

      submitted +-----+
      traffic   |    A|--------> TCB1
      --------->|    B|--------> TCB2
                |    C|--------> TCB3
                |    X|------+   +-----+
                +-----+      +-->|     |
                Classifier4      +-----+
                                 AbsoluteDrop4

      Figure 8: An Example of a Multi-Customer TCB

and the filters, based on each customer's source MAC address, could be
defined as follows:

      Filter1:
      Type:        MacAddress
      SrcValue:    01-02-03-04-05-06 (source MAC address of customer 1)
      SrcMask:     FF-FF-FF-FF-FF-FF
      DestValue:   00-00-00-00-00-00
      DestMask:    00-00-00-00-00-00

      Filter2:
      (similar to Filter1 but with customer 2's source MAC address as
      SrcValue)

      Filter3:
      (similar to Filter1 but with customer 3's source MAC address as
      SrcValue)

In this example, Classifier4 separates traffic submitted from different
customers based on the source MAC address in submitted packets. Those
packets with recognized source MAC addresses are passed to the TCB
implementing the TCS with the corresponding customer. Those packets with
unrecognized source MAC addresses are passed to a dropper.

TCB4 consists of a classification stage and an action element stage
which drops all unmatched traffic.
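
This model is implementation-neutral and defines no programming
interface, but, purely as an illustration, the dispatch behaviour of
TCB4 might be sketched in Python as follows. The packet representation
and the MAC addresses used for customers 2 and 3 are invented for the
example.

   # Illustrative sketch only: Classifier4 dispatching on source MAC
   # address to per-customer TCBs, with unmatched traffic passed to
   # AbsoluteDropper4.  The Packet type and the addresses of customers
   # 2 and 3 are hypothetical, not part of this model.

   from dataclasses import dataclass

   @dataclass
   class Packet:
       src_mac: str        # e.g. "01-02-03-04-05-06"
       dscp: int
       length: int         # octets

   def absolute_dropper4(pkt: Packet) -> None:
       """Unconditionally discard the packet."""
       return None

   def classifier4(pkt: Packet, tcb1, tcb2, tcb3):
       filters = {
           "01-02-03-04-05-06": tcb1,   # Filter1: customer 1
           "0A-0B-0C-0D-0E-0F": tcb2,   # Filter2: customer 2 (hypothetical)
           "AA-BB-CC-DD-EE-FF": tcb3,   # Filter3: customer 3 (hypothetical)
       }
       next_element = filters.get(pkt.src_mac.upper(), absolute_dropper4)
       return next_element(pkt)         # hand the packet to the next element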

8.4.  TCBs Supporting Microflow-based Services

The TCB illustrated above describes a configuration that might be
suitable for enforcing an SLS at a router's ingress. It assumes that the
customer marks its own traffic for the appropriate service level.  It
then limits the rate of aggregate traffic submitted at each service
level, thereby protecting the resources of the Diffserv network. It does
not provide any isolation between the customer's individual microflows.

A more complex example might be a TCB configuration that offers
additional functionality to the customer. It recognizes individual
customer microflows and marks each one independently. It also isolates
the customer's individual microflows from each other in order to prevent
a single microflow from seizing an unfair share of the resources
available to the customer at a certain service level. This is
illustrated in Figure 9.

Suppose that the customer has an SLS which specifies two service levels,
to be identified to the provider by DSCP A and DSCP B.  Traffic is first
directed to an MF classifier which classifies traffic based on
miscellaneous classification criteria, to a granularity sufficient to
identify individual customer microflows.
                     +-----+   +-----+
    Classifier1      |     |   |     |---------------+
        (MF)      +->|     |-->|     |     +-----+   |
      +-----+     |  |     |   |     |---->|     |   |
      |    A|------  +-----+   +-----+     +-----+   |
  --->|    B|-----+  Marker1   Meter1      Absolute  |
      |    C|---+ |                        Dropper1  |   +-----+
      |    X|-+ | |  +-----+   +-----+               +-->|A    |
      +-----+ | | |  |     |   |     |------------------>|B    |--->
              | | +->|     |-->|     |     +-----+   +-->|C    | to TCB2
              | |    |     |   |     |---->|     |   |   +-----+
              | |    +-----+   +-----+     +-----+   |    Mux1
              | |    Marker2   Meter2      Absolute  |
              | |                          Dropper2  |
              | |    +-----+   +-----+               |
              | |    |     |   |     |---------------+
              | |--->|     |-->|     |     +-----+
              |      |     |   |     |---->|     |
              |      +-----+   +-----+     +-----+
              |      Marker3   Meter3      Absolute
              |                            Dropper3
              V etc.

      Figure 9: An Example of a Marking and Traffic Isolation TCB

Each microflow can then be marked for a specific DSCP.  The metering
elements limit the contribution of each of the customer's microflows to
the service level for which it was marked.  Packets exceeding the
allowable limit for the microflow are dropped.

This TCB could be formally specified as follows:

      TCB1:
      Classifier1: (MF)
      FilterA:             Marker1
      FilterB:             Marker2
      FilterC:             Marker3
      etc.

      Marker1:
      Output:              Meter1

      Marker2:
      Output:              Meter2

      Marker3:
      Output:              Meter3

      Meter1:
      ConformingOutput:    Mux1.InputA
      NonConformingOutput: AbsoluteDropper1

      Meter2:
      ConformingOutput:    Mux1.InputB
      NonConformingOutput: AbsoluteDropper2

      Meter3:
      ConformingOutput:    Mux1.InputC
      NonConformingOutput: AbsoluteDropper3

      etc.

      Mux1:
      Output:              to TCB2

Note that the detailed traffic element declarations are not shown here.
Traffic is either dropped by TCB1 or emerges marked for one of two
DSCPs. This traffic is then passed to TCB2 which is illustrated in
Figure 10.

                     +-----+
                     |     |---------------> to Queue1
                  +->|     |     +-----+
        +-----+   |  |     |---->|     |
        |    A|---+  +-----+     +-----+
      ->|     |       Meter5     AbsoluteDropper4
        |    B|---+  +-----+
        +-----+   |  |     |---------------> to Queue2
      Classifier2 +->|     |     +-----+
         (BA)        |     |---->|     |
                     +-----+     +-----+
                      Meter6     AbsoluteDropper5

      Figure 10: Additional Example: TCB2

TCB2 could then be specified as follows:

      Classifier2: (BA)
      FilterA:               Meter5
      FilterB:               Meter6

      Meter5:
      ConformingOutput:      Queue1
      NonConformingOutput:   AbsoluteDropper4

      Meter6:
      ConformingOutput:      Queue2
      NonConformingOutput:   AbsoluteDropper5
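
Again purely for illustration (the model prescribes no implementation),
the cascade of TCB1 into TCB2 might be approximated by the following
Python sketch. The DSCP values, the rate figures, and the rule used to
choose a microflow's DSCP are all invented for the example, and the
meters are reduced to simple byte budgets rather than the token buckets
of Appendix A.

   # Illustrative sketch only: TCB1 (MF classify, mark, police per
   # microflow) feeding TCB2 (BA classify, police per aggregate, queue).
   # DSCPs, rates and the marking rule are hypothetical.

   DSCP_A, DSCP_B = 0x0A, 0x0C          # the two codepoints of the SLS

   queue1, queue2 = [], []              # stand-ins for Queue1 and Queue2

   def byte_budget_meter(budget):
       """A deliberately simplified meter: conforming while a byte budget
       remains.  A real meter would be a token bucket (see Appendix A)."""
       state = {"budget": budget}
       def conforms(pkt):
           state["budget"] -= pkt["len"]
           return state["budget"] >= 0
       return conforms

   microflow_meters = {}                    # Meter1, Meter2, Meter3, ...
   meter5 = byte_budget_meter(1_000_000)    # aggregate profile for DSCP A
   meter6 = byte_budget_meter(1_000_000)    # aggregate profile for DSCP B

   def tcb1(pkt):
       flow = (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"])   # MF key
       pkt["dscp"] = DSCP_A if pkt["sport"] < 1024 else DSCP_B       # Marker1..3
       m = microflow_meters.setdefault(flow, byte_budget_meter(100_000))
       if m(pkt):                        # Meter1..3
           tcb2(pkt)                     # conforming traffic leaves via Mux1
       # non-conforming traffic goes to AbsoluteDropper1..3 (discarded)

   def tcb2(pkt):
       if pkt["dscp"] == DSCP_A and meter5(pkt):     # FilterA / Meter5
           queue1.append(pkt)
       elif pkt["dscp"] == DSCP_B and meter6(pkt):   # FilterB / Meter6
           queue2.append(pkt)
       # non-conforming traffic goes to AbsoluteDropper4/5 (discarded)

   # Example: a single packet from a (hypothetical) customer web server
   tcb1({"src": "10.0.0.1", "dst": "192.0.2.9", "sport": 80,
         "dport": 31000, "len": 1460})
   print(len(queue1), len(queue2))      # 1 0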

8.5.  Cascaded TCBs

Nothing in this model prevents more complex scenarios in which one
microflow TCB precedes another (e.g. for TCBs implementing separate TCSs
for the source and for a set of destinations).

9.  Security Considerations

Security vulnerabilities of Diffserv network operation are discussed in
[DSARCH]. This document describes an abstract functional model of
Diffserv router elements. Certain denial-of-service attacks such as
those resulting from resource starvation may be mitigated by appropriate
configuration of these router elements; for example, by rate limiting
certain traffic streams or by authenticating traffic marked for higher
quality-of-service.

One particular theft- or denial-of-service issue may arise where a
token-bucket meter, with an absolute dropper for non-conforming traffic,
is used in a TCB to police a stream to a given TCS: the definition of
the token-bucket meter in section 5 indicates that it should be lenient
in accepting a packet whenever any bits of the packet would have been
within the profile; the definition of the leaky-bucket scheduler is
conservative in that a packet is to be transmitted only if the whole
packet fits within the profile. This difference may be exploited by a
malicious scheduler either to obtain QoS treatment for more octets than
allowed in the TCS or to disrupt (perhaps only slightly) the QoS
guarantees promised to other traffic streams.

10.  Acknowledgments

Concepts, terminology, and text have been borrowed liberally from
[POLTERM] and [DSMIB], as well as from other IETF work on MIBs and
policy-management.  We wish to thank the authors of some of those
documents: Fred Baker, Michael Fine, Keith McCloghrie, John Seligson,
Kwok Chan, Scott Hahn and Andrea Westerinen for their contributions.

This document has benefitted from the comments and suggestions of
several participants of the Diffserv working group, particularly John
Strassner and Walter Weiss.

11.  References

[AF-PHB]
     J. Heinanen, F. Baker, W. Weiss, and J. Wroclawski, "Assured
     Forwarding PHB Group", RFC 2597, June 1999.

[DSARCH]
     M. Carlson, W. Weiss, S. Blake, Z. Wang, D. Black, and E. Davies,
     "An Architecture for Differentiated Services", RFC 2475, December
     1998.

[DSFIELD]
     K. Nichols, S. Blake, F. Baker, and D. Black, "Definition of the
     Differentiated Services Field (DS Field) in the IPv4 and IPv6
     Headers", RFC 2474, December 1998.

[DSMIB]
     F. Baker, A. Smith, K. Chan, "Differentiated Services MIB",
     Internet Draft <http://www.ietf.org/internet-drafts/draft-ietf-
     diffserv-mib-04.txt>, July 2000.

[DSPIB]
     M. Fine, K. McCloghrie, J. Seligson, K. Chan, S. Hahn, and A.
     Smith, "Quality of Service Policy Information Base", Internet Draft
     <draft-ietf-diffserv-pib-00.txt>, March 2000.

[DSTERMS]
     D. Grossman, "New Terminology for Diffserv", Internet Draft
     <draft-ietf-diffserv-new-terms-02.txt>, November 1999.

[E2E]
     Y. Bernet, R. Yavatkar, P. Ford, F. Baker, L. Zhang, M. Speer, K.
     Nichols, R. Braden, B. Davie, J. Wroclawski, and E. Felstaine,
     "Integrated Services Operation over Diffserv Networks", Internet
     Draft <http://www.ietf.org/internet-drafts/draft-ietf-issll-
     diffserv-rsvp-04.txt>, March 2000.

[EF-PHB]
     V. Jacobson,  K. Nichols, and K. Poduri, "An Expedited Forwarding
     PHB", RFC 2598, June 1999.

[FLOYD]
     S. Floyd, "General Load Sharing", 1993.

[GTC]
     L. Lin, J. Lo, and F. Ou, "A Generic Traffic Conditioner", Internet
     Draft <http://www.ietf.org/internet-drafts/draft-lin-diffserv-
     gtc-01.txt>, August 1999.

[INTSERV]
     R. Braden, D. Clark and S. Shenker, "Integrated Services in the
     Internet Architecture: an Overview" RFC 1633, June 1994.

[POLTERM]
     A. Westerinen et al., "Policy Terminology", Internet Draft
     <http://www.ietf.org/internet-drafts/draft-ietf-policy-

[QOSDEVMOD]
     J. Strassner, W. Weiss, D. Durham, A. Westerinen, B. Moore, "Information Model for
     Describing Network Device QoS Mechanisms", Internet Draft
     <http://www.ietf.org/internet-drafts/draft-ietf-policy-qos-device-

[QUEUEMGMT]
     B. Braden et al., "Recommendations on Queue Management and
     Congestion Avoidance in the Internet", RFC 2309, April 1998.

[SRTCM]
     J. Heinanen, and R. Guerin, "A Single Rate Three Color Marker", RFC
     2697, September 1999.

[TRTCM]
     J. Heinanen, R. Guerin, "A Two Rate Three Color Marker", RFC 2698,
     September 1999.

[VIC]
     McCanne, S. and Jacobson, V., "vic: A Flexible Framework for Packet
     Video", ACM Multimedia '95, November 1995, San Francisco, CA, pp.
     511-522.  <ftp://ftp.ee.lbl.gov/papers/vic-mm95.ps.Z>

[802.1D]
     "Information technology - Telecommunications and information
     exchange between systems - Local and metropolitan area networks -
     Common specifications - Part 3: Media Access Control (MAC) Bridges:
     Revision.  This is a revision of ISO/IEC 10038: 1993, 802.1j-1992
     and 802.6k-1992.  It incorporates P802.11c, P802.1p and P802.12e.",
     ISO/IEC 15802-3: 1998.

12.  Appendix A. Discussion of Token Buckets and Leaky Buckets

The concept used for rate-control in several architectures, including
ATM, Frame Relay, Integrated Services and Differentiated Services,
consists of "leaky buckets" and/or "token buckets".  Both of these are,
by definition, theoretical relationships between some defined
burst_size, rate and interval:

                       rate = burst_size/interval

Thus, a token bucket or leaky bucket might specify an information rate
of 1.2 Mbps with a burst size of 1500 bytes. In this case, the token
rate is 1,200,000 bits per second, the token burst is 12,000 bits and
the token interval is 10 milliseconds. The specification says that
conforming traffic will, in the worst case, come in 100 bursts per
second of 1500 bytes each and at an average rate not exceeding 1.2 Mbps.
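
These figures can be verified directly; a trivial check (not part of
the model itself) is:

   # Check the example: 1.2 Mbps with a burst of 1500 bytes.
   rate = 1_200_000                    # token rate, bits per second
   burst = 1500 * 8                    # token burst, 12,000 bits
   interval = burst / rate             # token interval: 0.01 s = 10 ms
   bursts_per_second = 1 / interval    # 100 bursts of 1500 bytes each
   print(interval, bursts_per_second)  # 0.01 100.0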

A.1 Leaky Buckets

A leaky bucket algorithm is primarily used for shaping traffic as it
leaves an interface onto the network (handled under Queues and
Schedulers in this model). Traffic theoretically departs from an
interface at a rate of one bit every so many time units (in the example,
one bit every 0.83 microseconds) but, in fact, departs in multi-bit
units (packets) at a rate approximating the theoretical, as measured
over a longer interval. In the example, it might send one 1500 byte
packet every 10 ms or perhaps one 500 byte packet every 3.3 ms. It is
also possible to build multi-rate leaky buckets in which traffic departs
from the interface at varying rates depending on recent activity or
inactivity.

Implementations generally seek as constant a transmission rate as
achievable. In theory, a 10 Mbps shaped transmission stream from an
algorithmic implementation and a stream which is running at 10 Mbps
because its bottleneck link has been a 10 Mbps Ethernet link should be
indistinguishable. Depending on configuration, the approximation to
theoretical smoothness may vary by moving as much as an MTU from one
token interval to another. Traffic may also be jostled by other traffic
competing for the same transmission resources.
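
As an illustration of this behaviour (not a normative algorithm), a
leaky-bucket shaper with no burst tolerance can be sketched as follows;
it simply computes, for each packet, the earliest departure time
consistent with the configured rate.

   # Illustrative leaky-bucket shaper: departures never exceed rate_bps
   # when measured over a longer interval.

   def shape(packets, rate_bps):
       """packets: list of (arrival_time_s, length_bytes).
       Returns (departure_time_s, length_bytes) per packet."""
       departures = []
       next_free = 0.0                       # when the shaped output is idle
       for arrival, length in packets:
           start = max(arrival, next_free)   # wait for earlier packets to drain
           departures.append((start, length))
           next_free = start + (length * 8) / rate_bps
       return departures

   # Three 1500-byte packets arriving together at a 1.2 Mbps shaper
   # depart 10 ms apart: [(0.0, 1500), (0.01, 1500), (0.02, 1500)]
   print(shape([(0.0, 1500), (0.0, 1500), (0.0, 1500)], 1_200_000))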

A.2 Token Buckets

A token bucket, on the other hand, measures the arrival rate of traffic
from another device.  This traffic may originally have been shaped using
a leaky bucket shaper or its equivalent. The token bucket determines
whether the traffic (still) conforms to the specification.  Multi-rate
token buckets (e.g. token buckets with both a peak rate and a mean rate,
and sometimes more) are commonly used, such as described in [SRTCM] and
[TRTCM]. In this case, absolute smoothness is not expected, but
conformance to one or more of the specified rates is.

Simplistically, a data stream is said to conform to a simple token
bucket parameterised by {rate, burst_size} if, in any time interval t,
the system receives an amount of data not exceeding (rate * t) +
burst_size.
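
In code, this conformance test might be sketched as follows
(illustrative only; the names are not taken from any referenced
specification):

   # Simple {rate, burst_size} token bucket meter.  Over any interval t
   # it accepts at most rate*t + burst_size bits.

   def make_meter(rate_bps, burst_bits):
       tokens = float(burst_bits)          # bucket starts full
       last = 0.0
       def conforms(now_s, packet_bits):
           nonlocal tokens, last
           tokens = min(burst_bits, tokens + (now_s - last) * rate_bps)
           last = now_s
           if packet_bits <= tokens:       # whole packet must fit ("strict")
               tokens -= packet_bits
               return True
           return False
       return conforms

   meter = make_meter(1_200_000, 12_000)
   print(meter(0.000, 12_000))   # True:  consumes the initial burst
   print(meter(0.005, 12_000))   # False: only 6,000 tokens have accrued
   print(meter(0.010,  6_000))   # True:  12,000 tokens are available again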

For the multi-rate token bucket case, the data stream is said to conform
if, for each of the rates, the stream conforms to the token-bucket
profile appropriate for traffic of that class. For example, received
traffic that arrives pre-classified as one of the "excess" rates (e.g.
AF12 or AF13 traffic for a device implementing the AF1x PHB) is only
compared to the relevant "excess" token bucket profile.

A.3 Some Consequences

When used as a leaky bucket shaper, the above definition interacts with
clock granularity in ways one might not expect. A leaky bucket releases
a packet only when all of its bits would have been allowed: it does not
borrow from future capacity. If the clock is very fine grain, on the
order of the bit rate or faster, this is not an issue. But if the clock
is relatively slow (and millisecond or multi-millisecond clocks are not
unusual in networking equipment), this can introduce jitter to the
shaped stream.

The fact that data is organized into variable length packets introduces
some uncertainty in the conformance decision made by a downstream Meter
that is attempting to determine conformance to a traffic profile.
Theoretically, in this case, a token bucket accepts a packet only if all
of its bits would have been accepted and does not borrow the required
excess capacity from future capacity - this is referred to as a "strict"
token bucket.  This is consistent with [SRTCM] and [TRTCM]. In real-
world deployment, however, where MTUs are often larger than the burst
size offered by a link-layer network service provider and TCP is more
commonly ACK-paced than shaped using a leaky bucket, a "loose" or
"lenient" token bucket definition that would accept a packet if any of
its bits were within a profile offers a solution to the practical
problems that may arise from use of a strict meter.
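
The two interpretations differ only in a single comparison, as the
following fragment (same illustrative status as the other sketches in
this appendix) makes explicit; the loose form may leave the token count
negative, borrowing from a later interval, exactly as option (1) below
describes.

   # Loose vs. strict acceptance when some tokens remain but fewer than
   # the packet size.  'tb' is the current token count in octets.

   def accept(tb, packet_size, loose):
       if loose:
           ok = tb > 0                  # any remaining credit admits the packet
       else:
           ok = tb >= packet_size       # the whole packet must fit
       return ok, (tb - packet_size if ok else tb)

   print(accept(500, 1500, loose=True))    # (True, -1000): borrows 1000 octets
   print(accept(500, 1500, loose=False))   # (False, 500):  packet not accepted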

Internet Protocol (IP) packets are of variable-length but theoretical
token buckets operate using fixed-length time intervals or pieces of
data.  This leaves an implementor of a token bucket scheme with a
dilemma. When the amount of bandwidth tokens, TB, left in the token
bucket is positive but less than the size of the packet being operated
on, one of three things can be done:

 (1)   The whole size of the packet can be subtracted from the bucket,
       leaving it negative, remembering that the token bucket size must
       be added to TB rather than simply setting it "full". This
       potentially puts more than the token bucket size into this token
       bucket interval and less into the next. It does, however, make
       the average amount accepted per token bucket interval equal to
       the token burst. This approach accepts traffic if any bit in the
       packet would be accepted and borrows up to one MTU of capacity
       from one or more subsequent intervals when necessary. Such a
       token bucket implementation is said to be a "loose" token bucket.

 (2)   Alternatively, the amount can be left unchanged (and maybe an
       attempt could be made to accept the packet under another
       threshold in another bucket), remembering that the token bucket
       size must be added to the TB variable rather than simply setting
       it "full". This potentially puts less than the token bucket size
       into this token bucket interval and more into the next. Like the
       first option, it makes the average amount accepted per token
       bucket interval equal to the token burst.  This approach accepts
       traffic if every bit in the packet would be accepted and borrows
       up to one MTU of capacity from one or more previous intervals
       when necessary. Such a token bucket implementation is said to be
       a "strict" (or perhaps "stricter") token bucket.

 (3)   The TB variable can be set to zero to account for the first part
       of the packet and the remainder of the packet size can be taken
       out of the next-colored bucket. This, of course, has another bug:
       the same packet cannot have both conforming and nonconforming
       components in the Diffserv architecture and so is not really
       appropriate here.

Unfortunately, the thing that cannot be done is exactly to fit the token
burst specification with random sized packets: therefore token buckets
in a variable length packet environment always have some variance from
theoretical reality. This has also been observed in the ATM Guaranteed
Frame Rate (GFR) service category specification and Frame Relay.

Some find the behavior of a "loose" token bucket unacceptable, as it is
significantly different than the token bucket description for ATM and
for Frame Relay.  However, the "strict" token bucket approach has three
characteristics which are important to keep in mind:

 (1)   First, if the maximum token burst is smaller than the MTU, it is
       possible that traffic never matches the specification. This may
       be avoided by not allowing such a specification.

 (2)   Second, the strict token bucket specifications [SRTCM] and
       [TRTCM], as specified, are subject to a persistent under-run.
       These accumulate burst capacity over time, up to the maximum
       burst size. Suppose that the maximum burst size is exactly the
       size of the packets being sent - which one might call the
       "strictest" token bucket implementation. In such a case, when one
       packet has been accepted, the token depth becomes zero, and
       starts to accumulate.  If the next packet is received any time
       earlier than a token interval later, it will not be accepted. If
       the next packet arrives exactly on time, it will be accepted and
       the token depth again set to zero. If it arrives later, however,
       the token depth will stop accumulating, as it is capped by the
       maximum burst size, and tokens that would have accumulated
       between the end of that token interval and the actual arrival of
       the packet are lost. As a result, natural jitter in the network
       conspires against the algorithm to reduce the actual acceptance
       rate. Overcoming this error requires the maximum token bucket
       size to be significantly greater than the MTU.

 (3)   Third, operationally, a strict token bucket is reasonable for
       traffic which has been shaped by a leaky bucket shaper or a
       serial line. However, traffic in the Internet is rarely shaped in
       that way.  TCP applies no shaping to its traffic, but rather
       depends on longer-range ACK-clocking behavior to help it
       approximate a certain rate and explicitly sends traffic bursts
       during slow start, retransmission and fast recovery. Video-on-IP
       implementations such as [VIC] may have a leaky bucket shaper
       available to them, but often do not, and simply enqueue the
       output of their codec for transmission on the appropriate
       interface. As a result, in each of these cases, a strict shaper
       may reject traffic in the short term (single token interval)
       which it would have accepted if it had a longer time in view and
       which it needs to accept for the application to work properly. To
       work around this, the token interval must approximate or exceed
       the RTT of the session or sessions in question and the burst size
       must accommodate the largest burst that the originator might
       send.

A.4 Mathematics

The behavior defined in [SRTCM] and [TRTCM] is not mandatory for
compliance, but we give here a mathematical definition of two-parameter
token bucket operation which is consistent with those documents and
which can be used to define a shaping profile.

Define a token bucket with bucket size BS, token accumulation rate R and
instantaneous token occupancy T(t). Assume that T(0) = BS.

Then after an arbitrary interval with no packet arrivals, T(t) will not
change since the bucket is already full of tokens. Assume a packet of
size B bytes arrives at time t'; the bucket occupancy is still T(t'-) =
BS. Then, as long as B <= BS, the packet conforms to the meter, and

                            T(t') = BS - B.

Assume an interval v = t - t' elapses before the next packet, of size C
<= BS, arrives. T(t-) is given by the following equation:

                    T(t-) = min { BS, T(t') + v*R }

If T(t-) - C >= 0, the packet conforms and T(t) = T(t-) - C.  Otherwise,
the packet does not conform and T(t) = T(t-).

This function can be used to define a shaping profile. If a packet of
size C arrives at time t, it will be eligible for transmission at time
te given as follows (we still assume C <= BS):

                           te = max { t, t" }

where t" = (C - T(t') + t'*R)/R, T(t") = C, the time when C credits have
accumulated in the bucket, and when the packet would conform if the
token bucket were a meter. te != t" only if t != > t".
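
A direct transcription of these equations (again purely illustrative;
the class and method names are not part of the model) is:

   # Two-parameter token bucket per the equations above: size BS, rate R,
   # occupancy T.  meter() gives the conformance decision; eligible()
   # gives the shaping eligibility time te for a packet of size C.

   class TokenBucket:
       def __init__(self, BS, R):
           self.BS, self.R = float(BS), float(R)
           self.T = float(BS)               # T(0) = BS
           self.t_last = 0.0

       def _refill(self, t):
           # T(t-) = min { BS, T(t') + v*R }, where v = t - t'
           self.T = min(self.BS, self.T + (t - self.t_last) * self.R)
           self.t_last = t

       def meter(self, t, C):
           """True if a packet of size C arriving at time t conforms."""
           self._refill(t)
           if self.T - C >= 0:
               self.T -= C
               return True
           return False                     # non-conforming; T unchanged

       def eligible(self, t, C):
           """Earliest time te >= t at which the packet would conform."""
           self._refill(t)
           te = max(t, t + (C - self.T) / self.R)   # t", when C credits exist
           self.T = min(self.BS, self.T + (te - t) * self.R) - C
           self.t_last = te
           return te

   tb = TokenBucket(BS=12_000, R=1_200_000)         # bits, bits/second
   print(tb.meter(0.0, 12_000))     # True: the initial burst conforms
   print(tb.eligible(0.0, 12_000))  # 0.01: the next full-size packet is
                                    #       eligible one token interval later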

13.  Authors' Addresses

   Yoram Bernet
   Microsoft
   One Microsoft Way
   Redmond, WA  98052
   Phone:  +1 425 936 9568
   E-mail: yoramb@microsoft.com

   Steven Blake
   Ericsson
   920 Main Campus Drive, Suite 500
   Raleigh, NC  27606
   Phone:  +1 919 472 9913
   E-mail: slblake@torrentnet.com

   Daniel Grossman
   Motorola Inc.
   20 Cabot Blvd.
   Mansfield, MA  02048
   Phone:  +1 508 261 5312
   E-mail: dan@dma.isg.mot.com

   Andrew Smith (editor)
   Allegro Networks
   6399 San Ignacio Ave.
   San Jose, CA 95119
   FAX:  +1 415 345 1827
   E-mail: andrew@allegronetworks.com

Table of Contents

1 Introduction
2 Glossary
3 Conceptual Model
3.1 Components of a Diffserv Router
3.1.1 Datapath
3.1.2 Configuration and Management Interface
3.1.3 Optional QoS Agent Module
3.2 Diffserv Functions at Ingress and Egress
3.3 Shaping and Policing
3.4 Hierarchical View of the Model
4 Classifiers
4.1 Definition
4.1.1 Filters
4.1.2 Overlapping Filters
4.2 Examples
4.2.1 Behaviour Aggregate (BA) Classifier
4.2.2 Multi-Field (MF) Classifier
4.2.3 Free-form Classifier
4.2.4 Other Possible Classifiers
5 Meters
5.1 Examples
5.1.1 Average Rate Meter
5.1.2 Exponential Weighted Moving Average (EWMA) Meter
5.1.3 Two-Parameter Token Bucket Meter
5.1.4 Multi-Stage Token Bucket Meter
5.1.5 Null Meter
6 Action Elements
6.1 DSCP Marker
6.2 Absolute Dropper
6.3 Multiplexor
6.4 Counter
6.5 Null Action
7 Queueing Elements
7.1 Queueing Model
7.1.1 FIFO Queue
7.1.2 Scheduler
7.1.3 Algorithmic Dropper
7.1.4 Constructing queueing blocks from the elements
7.2 Sharing load among traffic streams using queueing
7.2.1 Load Sharing
7.2.2 Traffic Priority
8 Traffic Conditioning Blocks (TCBs)
8.1 TCB
8.1.1 Building blocks for Queueing
8.2 An Example TCB
8.3 An Example TCB to Support Multiple Customers
8.4 TCBs Supporting Microflow-based Services
8.5 Cascaded TCBs
9 Security Considerations
10 Acknowledgments
11 References
12 Appendix A. Discussion of Token Buckets and Leaky Buckets
13 Authors' Addresses
14 Full Copyright

14.  Full Copyright

   Copyright (C) The Internet Society (2000). All Rights Reserved.

   This document and translations of it may be copied and furnished to
   others, and derivative works that comment on or otherwise explain it
   or assist in its implementation may be prepared, copied, published and
   distributed, in whole or in part, without restriction of any kind,
   provided that the above copyright notice and this paragraph are
   included on all such copies and derivative works. However, this
   document itself may not be modified in any way, such as by removing
   the copyright notice or references to the Internet Society or other
   Internet organizations, except as needed for the purpose of
   developing Internet standards in which case the procedures for
   copyrights defined in the Internet Standards process must be
   followed, or as required to translate it into languages other than
   English.

   The limited permissions granted above are perpetual and will not be
   revoked by the Internet Society or its successors or assigns.

   This document and the information contained herein is provided on an
   "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING
   TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING
   BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION
   HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF
   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.