Network Working Group                                     B. Constantine
Internet-Draft                                                      JDSU
Intended status: Informational                                 G. Forget
Expires: May 14, 2011                      Bell Canada (Ext. Consultant)
                                                            Rudiger Geib
                                                        Deutsche Telekom
                                                        Reinhard Schrage
                                                      Schrage Consulting
                                                       November 14, 2010

                  Framework for TCP Throughput Testing
                draft-ietf-ippm-tcp-throughput-tm-08.txt

Abstract

   This framework describes a methodology for measuring sustained
   end-to-end TCP throughput performance in a managed IP network.  The
   intention is to provide a practical methodology to help users
   validate the TCP layer performance.  The goal is to provide a better
   indication of the user experience.  In this framework, various TCP
   and IP parameters are identified and should be tested as part of a
   managed IP network verification.

Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on May 14, 2011.

Copyright Notice

   Copyright (c) 2010 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Introduction . . . . . . . . . . . . . . . . . . . . . . . . .  3
     1.1   Test Set-up and Terminology  . . . . . . . . . . . . . . .  4
   2.  Scope and Goals of this Methodology. . . . . . . . . . . . . .  5
     2.1   TCP Equilibrium  . . . . . . . . . . . . . . . . . . . . .  6
   3.  TCP Throughput Testing Methodology . . . . . . . . . . . . . .  7
     3.1   Determine Network Path MTU . . . . . . . . . . . . . . . .  9
     3.2   Baseline Round Trip Time and Bandwidth . . . . . . . . . . 10
         3.2.1  Techniques to Measure Round Trip Time . . . . . . . . 10
         3.2.2  Techniques to Measure end-to-end Bandwidth. . . . . . 11
     3.3   TCP Throughput Tests . . . . . . . . . . . . . . . . . . . 12
         3.3.1  Calculate Ideal TCP Receive Window Size . . . . . . . 12
         3.3.2  Metrics for TCP Throughput Tests  . . . . . . . . . . 15
         3.3.3  Conducting the TCP Throughput Tests . . . . . . . . . 18
         3.3.4  Single vs. Multiple TCP Connection Testing  . . . . . 19
         3.3.5  Interpretation of the TCP Throughput Results  . . . . 20
     3.4   Traffic Management Tests . . . . . . . . . . . . . . . . . 20
         3.4.1  Traffic Shaping Tests . . . . . . . . . . . . . . . . 21
          3.4.1.1  Interpretation of Traffic Shaping Test Results . . 21
         3.4.2  RED Tests . . . . . . . . . . . . . . . . . . . . . . 22
          3.4.2.1  Interpretation of RED Results . . . . . . . . . .. 23
   4.  Security Considerations  . . . . . . . . . . . . . . . . . . . 23
   5.  IANA Considerations  . . . . . . . . . . . . . . . . . . . . . 23
   6.  Acknowledgments  . . . . . . . . . . . . . . . . . . . . . . . 23
   7.  References . . . . . . . . . . . . . . . . . . . . . . . . . . 24
     7.1   Normative References . . . . . . . . . . . . . . . . . . . 24
     7.2   Informative References . . . . . . . . . . . . . . . . . . 24

   Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 25

1. Introduction

   Network providers are coming to the realization that Layer 2/3
   testing and TCP layer testing are required to more adequately ensure
   end-user satisfaction.  An SLA (Service Level Agreement) is provided
   to business customers and is generally based upon Layer 2/3 criteria
   such as access rate, packet delay, loss and delay variations.  On
   the other hand, measuring TCP throughput provides meaningful results
   with respect to user experience.  Thus, the network provider
   community desires to measure IP network performance at the TCP
   layer.

   Additionally, business enterprise customers seek to conduct
   repeatable TCP throughput tests between enterprise locations.  Since
   these enterprises rely on the networks of the providers, a common
   test methodology with predefined metrics will benefit both parties.

   Note that the primary focus of this methodology is managed business
   class IP networks; i.e. those Ethernet terminated services for which
   businesses are provided an SLA from the network provider.  End-users
   with "best effort" access between locations can use this methodology,
   but this framework and its metrics are intended to be used in a
   predictable managed IP service environment.

   The intent behind this document is to define a methodology for
   testing sustained TCP layer performance.  In this document, the
   maximum achievable TCP throughput is that amount of data per unit
   time that TCP transports when trying to reach Equilibrium, i.e.
   after the initial slow start and congestion avoidance phases.  We
   refer to this as the maximum achievable TCP Throughput for the TCP
   connection(s).

   TCP uses a congestion window (TCP CWND) to determine how many
   packets it can send at one time.  A larger TCP CWND permits a higher
   throughput.  TCP "slow start" and "congestion avoidance" algorithms
   together determine the TCP CWND size.  The maximum TCP CWND size is
   also limited by the buffer space allocated by the kernel for each
   socket.  For each socket, there is a default buffer size that can be
   changed by the program using a system library call just before
   opening the socket.  There is also a kernel enforced maximum buffer
   size.  This buffer size can be adjusted at both ends of the socket
   (send and receive).  In order to obtain the maximum throughput, it
   is critical to use optimal TCP Send and Receive Socket Buffer sizes
   as well as the optimal TCP Receive Window size.
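
   For illustration only, the following Python sketch shows how a test
   program can request larger Send and Receive Socket Buffers through
   the standard socket library call mentioned above, just before
   opening the connection.  The server address, port and buffer value
   are placeholders; this is not part of the methodology itself.

      # Illustrative only: request larger socket buffers before connecting.
      # The kernel may clamp the requested values to its configured maximum.
      import socket

      BUFFER_SIZE = 256 * 1024                      # placeholder value
      s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUFFER_SIZE)
      s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUFFER_SIZE)
      s.connect(("192.0.2.10", 5001))               # placeholder test server
      print("send buffer granted:",
            s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
      print("recv buffer granted:",
            s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
      s.close()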

   There are many variables to consider when conducting a TCP
   throughput test, and this methodology focuses on the most common
   ones:

   - Path MTU and Maximum Segment Size (MSS)
   - RTT and Bottleneck Bandwidth (BB)
   - Ideal TCP Receive Window (including Ideal Receive Socket Buffer)
   - Ideal Send Socket Buffer
   - TCP Congestion Window (TCP CWND)
   - Single Connection and Multiple Connections testing

   This methodology proposes TCP testing that should be performed in
   addition to traditional Layer 2/3 tests.  Layer 2/3 tests are
   required to verify the integrity of the network before conducting
   TCP tests.  Examples include iperf (UDP mode) or manual packet layer
   test techniques where packet throughput, loss, and delay
   measurements are conducted.  When available, standardized testing
   similar to RFC 2544 [RFC2544] but adapted for use in operational
   networks may be used.
   Note: RFC 2544 was never meant to be used outside a lab environment.

1.1 Test Set-up and Terminology

   This section provides a general overview of the test configuration
   for this methodology.  The test is intended to be conducted on an
   end-to-end operational and managed IP network.  A multitude of
   network architectures and topologies can be tested.  The following
   set-up diagram is very general and it only illustrates the
   segmentation within end user and network provider domains.

   Common terminologies used in the test methodology are:

   - Bottleneck Bandwidth (BB), lowest bandwidth along the complete
     path.  Bottleneck Bandwidth and Bandwidth are used synonymously
     in this document.  Most of the time, the Bottleneck Bandwidth is
     in the access portion of the wide area network (CE - PE).
   - Customer Provided Equipment (CPE), refers to customer owned
     equipment (routers, switches, computers, etc.).
   - Customer Edge (CE), refers to provider owned demarcation device.
   - End-user, refers to the business enterprise customer.  For the
     purposes of conducting TCP throughput tests, this may be the IT
     department.
   - Network Under Test (NUT), refers to the tested IP network path.
   - Provider Edge (PE), refers to the provider's distribution
     equipment.
   - P (Provider), refers to provider core network equipment.
   - Round-Trip Time (RTT), refers to Layer 4 back and forth delay.
   - Round-Trip Delay (RTD), refers to Layer 1 back and forth delay.
   - TCP Throughput Test Device (TCP TTD), refers to a compliant TCP
     host that generates traffic and measures metrics as defined in
     this methodology, i.e. a dedicated communications test instrument.

 +----+ +----+ +----+  +----+ +---+  +---+ +----+  +----+ +----+ +----+
 |    | |    | |    |  |    | |   |  |   | |    |  |    | |    | |    |
 | TCP|-| CPE|-| CE |--| PE |-| P |--| P |-| PE |--| CE |-| CPE|-| TCP|
 | TTD| |    | |    |BB|    | |   |  |   | |    |BB|    | |    | | TTD|
 +----+ +----+ +----+  +----+ +---+  +---+ +----+  +----+ +----+ +----+
        <------------------------ NUT ------------------------>

    R >-----------------------------------------------------------|
    T                                                             |
    T <-----------------------------------------------------------|

   Note that the NUT may consist of a variety of devices including but
   not limited to, load balancers, proxy servers or WAN acceleration
   devices.  The detailed topology of the NUT should be well understood
   when conducting the TCP throughput tests, although this methodology
   makes no attempt to characterize TCP performance related to specific
   network architectures.

2. Scope and Goals of this Methodology

   Before defining the goals, it is important to clearly define the
   areas that are out-of-scope.

   - This methodology is not intended to predict the TCP throughput
   during the transient stages of a TCP connection, such as the initial
   slow start.

   - This methodology is not intended to definitively benchmark TCP
   implementations of one OS to another, although some users may find
   some value in conducting qualitative experiments.

   - This methodology is not intended to provide detailed diagnosis of
   problems within end-points or within the network itself as related
   to non-optimal TCP performance, although a results interpretation
   section for each test step may provide insight into potential
   issues.

   - This methodology does not propose to operate permanently with high
   measurement loads.  TCP performance and optimization within
   operational networks may be captured and evaluated using data from
   the "TCP Extended Statistics MIB" [RFC4898].

   - This methodology is not intended to measure TCP throughput as part
   of an SLA, or to compare the TCP performance between service
   providers or to compare between implementations of this methodology
   in dedicated communications test instruments.

   In contrast to the above exclusions, a primary goal is to define a
   method to conduct a structured, practical, end-to-end assessment of
   sustained TCP performance within a managed business class IP
   network.  Another key goal is to establish a set of "best practices"
   that a non-TCP expert should apply when validating the ability of a
   managed network to carry end-user TCP applications.

   Other specific goals are to:

   - Provide a practical test approach that specifies well understood,
   configurable IP host TCP parameters such as TCP Receive Window size,
   Socket Buffer size, MSS (Maximum Segment Size), number of
   connections, and how these affect the outcome of TCP performance
   over a network.  See section 3.3.3.

   - Provide specific test conditions like link speed, RTT, TCP Receive
   Window size, Socket Buffer size and maximum achievable TCP
   throughput when trying to reach TCP Equilibrium.  For guideline
   purposes, provide examples of these test conditions and their
   maximum achievable TCP throughput.  Section 2.1 provides specific
   details concerning the definition of TCP Equilibrium within the
   context of this methodology, while section 3 provides specific test
   conditions with examples.

   - Define three (3) basic metrics that can be used to compare the
   performance of TCP connections under various network conditions.
   See section 3.3.2.

   - In test situations where the recommended procedure does not yield
   the maximum achievable TCP throughput results, this methodology
   provides some possible areas within the end host or network that
   should be considered for investigation.  Although again, this
   methodology is not intended to provide a detailed diagnosis on these
   issues.  See section 3.3.5.

2.1 TCP Equilibrium

   TCP connections have three (3) fundamental congestion window phases
   as documented in [RFC5681].

   These 3 phases are:
   1 - The Slow Start phase, which occurs at the beginning of a TCP
   transmission or after a retransmission time out.

   2 - The Congestion Avoidance phase, during which TCP ramps up to
   establish the maximum attainable throughput on an end-to-end network
   path.  Retransmissions are a natural by-product of the TCP
   congestion avoidance algorithm as it seeks to achieve maximum
   throughput.

   3 - The Retransmission Time-out phase, which could include Fast
   Retransmit (Tahoe) or Fast Recovery (Reno & New Reno).  When
   multiple packet loss occurs, the Congestion Avoidance phase
   transitions to a Fast Retransmission or Fast Recovery phase
   depending upon the TCP implementation.  If a Time-Out occurs, TCP
   transitions back to the Slow Start phase.

   The following diagram depicts these 3 phases.

            |      ssthresh
   TCP      |         |       Trying to reach TCP Equilibrium >>>>>>>>>>>
   Through- |         |
   put      |         |     /\      /\/\      ssthresh halving
            |         |    /  \    /    \     upon loss events
            |  1 Slow |   /    \  /      \    /\      3 Retransmission
            |   Start |  /      \/        \  /  \         Time-out
            |        _| /   2 Congestion   \/    \            |  1 Slow
            |      _/  |/     Avoidance           \           |   Start
            |    _/                                \          |    _/
            |__/____________________________________\_________|*__/_____
            |                  * = Minimum TCP CWND after Time-Out
                     Time >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
   Note : ssthresh = Slow Start threshold.

   Through the above 3 phases, TCP is trying to reach Equilibrium, but
   since packet loss is currently its only available feedback indicator,
   TCP will never reach that goal.  However, a well tuned and managed
   IP network with well tuned IP hosts and applications should perform
   very close to TCP Equilibrium and to the BB (Bottleneck Bandwidth).

   This TCP methodology provides guidelines to measure the maximum
   achievable TCP throughput, or maximum TCP sustained rate, obtained
   by congestion avoidance after the TCP CWND has stabilized to an
   optimal value.  All maximum achievable TCP throughputs specified in
   section 3 are with respect to this condition.

   It is important to clarify the interaction between the sender's Send
   Socket Buffer and the receiver's advertised TCP Receive Window.  TCP
   test programs such as iperf, ttcp, etc. allow the sender to control
   the quantity of TCP Bytes transmitted and unacknowledged
   (in-flight), commonly referred to as the Send Socket Buffer.  This
   is done independently of the TCP Receive Window size advertised by
   the receiver.  Implications to the capabilities of the Throughput
   Test Device (TTD) are covered at the end of section 3.

3. TCP Throughput Testing Methodology

   As stated earlier in section 1, it is considered best practice to
   verify the integrity of the network by conducting Layer 2/3 tests
   such as [RFC2544] or other methods of network stress tests.
   However, it is important to mention here that RFC 2544 was never
   meant to be used outside a lab environment.

   If the network is not performing properly in terms of packet loss,
   jitter, etc. then the TCP layer testing will not be meaningful.  A
   dysfunctional network will not reach close enough to TCP Equilibrium
   to provide optimal TCP throughputs with the available bandwidth.

   TCP Transfer Index Throughput testing may require cooperation between the end user
   customer and is defined as:

                     Actual TCP Transfer Time
                    -------------------------
                     Ideal TCP Transfer Time

   The Ideal TCP Transfer time is derived from the network path
   bottleneck bandwidth provider.  In a Layer 2/3 VPN architecture,
   the testing should be conducted either on the CPE or on the CE device
   and not on the various PE (Provider Edge) router.

   The following represents the sequential order of steps for this
   testing methodology:

   1. Identify the Path MTU.  Packetization Layer Path MTU Discovery
   or PLPMTUD [RFC4821] MUST be conducted to verify the network path
   MTU.  Conducting PLPMTUD establishes the upper limit for the MSS to
   be used in subsequent steps.

   2. Baseline Round Trip Time and Bandwidth.  This step establishes
   the inherent, non-congested Round Trip Time (RTT) and the bottleneck
   bandwidth of the end-to-end network path.  These measurements are
   used to provide estimates of the ideal TCP Receive Window and Send
   Socket Buffer sizes that SHOULD be used in subsequent test steps.
   These measurements reference [RFC2681] and [RFC4898] to measure RTD
   and the associated RTT.

   3. TCP Connection Throughput Tests.  With baseline measurements of
   Round Trip Time and bottleneck bandwidth, single connection and
   multiple connection throughput tests SHOULD be conducted to baseline
   network performance expectations.

   4. Traffic Management Tests.  Various traffic management and queuing
   techniques can be tested in this step, using multiple TCP
   connections.  Multiple connections testing should verify that the
   network is configured properly for traffic shaping versus policing,
   various queuing implementations and RED.

   Important to note are some of the key characteristics and
   considerations for the test instrument.  The test host may be a
   standard computer or a dedicated communications test instrument.
   In both cases, it must be capable of emulating both a client and a
   server.

   The following criteria should be considered when selecting whether
   the test host can be a standard computer or has to be a dedicated
   communications test instrument:

   - TCP implementation used by the test host, OS version, i.e. Linux
   OS kernel using TCP Reno, TCP options supported, etc.  These will
   obviously be more important when using dedicated communications test
   instruments where the TCP implementation may be customized or tuned
   to run in higher performance hardware.  When a compliant TCP TTD is
   used, the TCP implementation MUST be identified in the test results.
   The compliant TCP TTD should be usable for complete end-to-end
   testing through network security elements and should also be usable
   for testing network sections.

   - More importantly, the test host MUST be capable of generating and
   receiving stateful TCP test traffic at the full link speed of the
   network under test.  Stateful TCP test traffic means that the test
   host MUST fully implement a TCP stack; this is generally a comment
   aimed at dedicated communications test equipment which sometimes
   "blasts" packets with TCP headers.  As a general rule of thumb,
   testing TCP throughput at rates greater than 100 Mbit/sec MAY
   require high performance server hardware or dedicated hardware based
   test tools.

   - A compliant TCP Throughput Test Device MUST allow adjusting both
   Send Socket Buffer and TCP Receive Window sizes.  The Receive Socket
   Buffer MUST be large enough to accommodate the TCP Receive Window.

   - Measuring RTT and retransmissions per connection will generally
   require a dedicated communications test instrument.  In the absence
   of dedicated hardware based test tools, these measurements may need
   to be conducted with packet capture tools, i.e. conduct TCP
   throughput tests and analyze RTT and retransmission results in
   packet captures.  Another option may be to use the "TCP Extended
   Statistics MIB" per [RFC4898].

   - The RFC4821 PLPMTUD test SHOULD be conducted with a dedicated
   tester which exposes the ability to run the PLPMTUD algorithm
   independently from the OS stack.

3.1. Determine Network Path MTU.  Packetization Layer MTU

   TCP implementations should use Path MTU Discovery techniques
   (PMTUD).  PMTUD relies on ICMP 'need to frag' messages to learn the
   maximum network path MTU.  When a device has a packet to send which
   has the Don't Fragment (DF) bit in the IP header set and the packet
   is larger than the Maximum Transmission Unit (MTU) of the next hop,
   the packet is dropped and the device sends an ICMP 'need to frag'
   message back to the host that originated the packet.  The ICMP 'need
   to frag' message includes the next hop MTU, which PMTUD uses to tune
   the TCP Maximum Segment Size (MSS).  Unfortunately, because many
   network managers completely disable ICMP, this technique does not
   always prove reliable.

   Packetization Layer Path MTU Discovery or PLPMTUD [RFC4821] MUST
   then be conducted to verify the network path MTU.  PLPMTUD can be
   used with or without ICMP.  The following sections provide a summary
   of the PLPMTUD approach and an example using TCP.  [RFC4821]
   specifies a search_high and a search_low parameter for the MTU.  As
   specified in [RFC4821], 1024 Bytes is a safe value for search_low in
   modern networks.

   It is important to determine the link overhead along the IP path,
   and then to select a TCP MSS size corresponding to the Layer 3 MTU.
   For example, if the MTU is 1024 Bytes and the TCP/IP headers are 40
   Bytes, then the MSS would be set to 984 Bytes.

   An example scenario is a network where the actual path MTU is 1240
   Bytes.  The TCP client probe MUST be capable of setting the MSS for
   the probe packets and could start at MSS = 984 (which corresponds to
   an MTU size of 1024 Bytes).

   The TCP client probe would open a TCP connection and advertise the
   MSS as 984.  Note that the client probe MUST generate these packets
   with the DF bit set.  The TCP client probe then sends test traffic
   per a small default Send Socket Buffer size of ~8 KBytes.  It should
   be kept small to minimize the possibility of congesting the network,
   which may induce packet loss.  The duration of the test should also
   be short (10-30 seconds), again to minimize congestive effects
   during the test.

   In the example of a 1240 Bytes path MTU, probing with an MSS equal
   to 984 would yield a successful probe and the test client packets
   would be successfully transferred to the test server.

   Also note that the test client MUST verify that the MSS advertised
   is indeed negotiated.  Network devices with built-in Layer 4
   capabilities can intercede during the connection establishment and
   reduce the advertised MSS to avoid fragmentation.  This is certainly
   a desirable feature from a network perspective, but it can yield
   erroneous test results if the client test probe does not confirm the
   negotiated MSS.

   The next test probe would use the search_high value, and this would
   be set to MSS = 1460 to correspond to a 1500 Bytes MTU.  In this
   example, the test client will retransmit based upon time-outs, since
   no ACKs will be received from the test server.  This test probe is
   marked as a conclusive failure if none of the test packets are
   ACK'ed.  If any of the test packets are ACK'ed, network congestion
   may be the cause and the test probe is not conclusive.  Re-testing
   at other times of day is recommended to further isolate.

   The test is repeated until the desired granularity of the MTU is
   discovered.  The method can yield precise results at the expense of
   probing time.  One approach may be to reduce the probe size to half
   between the unsuccessful search_high and successful search_low value
   and raise it by half also when seeking the upper limit.
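
   The search bookkeeping described above can be sketched as follows.
   In this Python fragment, probe_succeeds() is a hypothetical helper
   that would send a short, DF-marked transfer with the given MSS (for
   example through a dedicated tester, as noted earlier) and return
   whether the test packets were ACK'ed; only the search logic itself
   is illustrated here.

      # Sketch of the MTU search bookkeeping only.  probe_succeeds(mss) is
      # a hypothetical helper returning True if a short DF-marked transfer
      # using the given MSS is ACK'ed by the far-end test server.
      TCP_IP_HEADERS = 40                     # Bytes, as in the example above

      def find_path_mtu(probe_succeeds, search_low=1024, search_high=1500):
          if probe_succeeds(search_high - TCP_IP_HEADERS):
              return search_high              # the full-size MTU already works
          low, high = search_low, search_high # low succeeds, high fails
          while high - low > 1:
              candidate = (low + high) // 2   # halve the remaining interval
              if probe_succeeds(candidate - TCP_IP_HEADERS):
                  low = candidate
              else:
                  high = candidate
          return low                          # largest MTU that was ACK'ed

   With the 1240 Bytes path MTU of the example above, and assuming the
   search_low probe succeeds, this search converges on 1240 after
   roughly ten probes.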

3.2. Baseline Round Trip Time and Bandwidth

   Before stateful TCP testing can begin, it is important to determine
   the baseline Round Trip Time (non-congested inherent delay) and the
   bottleneck bandwidth of the end-to-end network to be tested.  These
   measurements are used to provide estimates of the ideal TCP Receive
   Window and Send Socket Buffer sizes that SHOULD be used in
   subsequent test steps.

3.2.1 Techniques to choose for search_low Measure Round Trip Time

   Following the definitions used in section 1.1, Round Trip Time
   (RTT) is the elapsed time between the clocking in of the first bit
   of a TCP payload sent packet to the receipt of the last bit of the
   corresponding Acknowledgment.  Round Trip Delay (RTD) is used
   synonymously to twice the Link Latency.  RTT measurements SHOULD use
   techniques defined in [RFC2681] or statistics available from MIBs
   defined in [RFC4898].

   The RTT SHOULD be baselined during "off-peak" hours to obtain a
   reliable figure for inherent network latency versus additional delay
   caused by network buffering.  When sampling values of RTT over a
   test interval, the minimum value measured SHOULD be used as the
   baseline RTT since this will most closely estimate the inherent
   network latency.  This inherent RTT is also used to determine the
   Buffer Delay Percentage metric, which is defined in section 3.3.2.

   The following list is not meant to be exhaustive, although it
   summarizes some of the most common ways to determine round trip
   time.  The desired resolution of the measurement (i.e. msec versus
   usec) may dictate whether the RTT measurement can be achieved with
   ICMP pings or by a dedicated communications test instrument with
   precision timers.  The objective in this section is to list several
   techniques in order of decreasing accuracy.

   - Use test equipment on each end of the network, "looping" the
   far-end tester so that a packet stream can be measured back and
   forth from end-to-end.  This RTT measurement may be compatible with
   delay measurement protocols specified in [RFC5357].

   - Conduct packet captures of TCP test sessions using "iperf" or FTP,
   or other TCP test applications.  By running multiple experiments,
   packet captures can then be analyzed to estimate RTT based upon the
   SYN -> SYN-ACK from the 3 way handshake at the beginning of the TCP
   sessions.  Note that Firewalls might slow down 3 way handshakes, so
   it might be useful to compare with the RTT measured later on in the
   same capture.

   - ICMP Pings may also be adequate to provide round trip time
   estimations.  Some limitations with ICMP Ping may include msec
   resolution and whether the network elements are responding to pings
   or not.  Also, ICMP is often rate-limited and segregated into
   different buffer queues, so it is not as reliable and accurate as
   in-band measurements.
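
   As a lightweight variant of the packet capture technique above, the
   round trip of the three-way handshake can also be timed directly
   from the connecting host.  The short Python sketch below does this
   with placeholder server values; its resolution is limited by the
   host clock and, as noted above, firewalls may slow the handshake.

      # Rough RTT estimate from the TCP three-way handshake (connect()
      # returns once the SYN-ACK has been received and acknowledged).
      import socket, time

      def handshake_rtt(host="192.0.2.10", port=5001, samples=10):
          rtts = []
          for _ in range(samples):
              start = time.perf_counter()
              s = socket.create_connection((host, port), timeout=2.0)
              rtts.append(time.perf_counter() - start)
              s.close()
          return min(rtts)          # minimum sample approximates inherent RTT

      print("baseline RTT estimate: %.3f ms" % (handshake_rtt() * 1000.0))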

3.2.2 Techniques to Measure end-to-end Bandwidth

   There are many well established techniques available to provide
   estimated measures of bandwidth over a network.  These measurements
   SHOULD be conducted in both directions of the network, especially
   for access networks, which may be asymmetrical.  Measurements SHOULD
   use network capacity techniques defined in [RFC5136].

   Before any TCP Throughput test can be done, a bandwidth measurement
   test MUST be run with stateless IP streams (not stateful TCP) in
   order to determine the available bandwidths in each direction.  This
   test should be performed at various intervals throughout a business
   day or even across a week.  Ideally, the bandwidth test should
   produce logged outputs of the achieved bandwidths across the test
   interval.
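
   For example, a stateless UDP stream test of this kind can be
   scheduled from a small script around an existing tool such as iperf
   in UDP mode (mentioned in section 1).  The sketch below assumes an
   iperf UDP server is already running at the far end; the server
   address, offered rate and schedule are placeholders.

      # Log hourly UDP (stateless) bandwidth samples across a business day,
      # assuming "iperf -s -u" is already running on the far-end host.
      import datetime, subprocess, time

      SERVER = "192.0.2.10"                   # placeholder far-end server
      OFFERED_RATE = "40M"                    # e.g. close to the expected BB

      with open("bandwidth_log.txt", "a") as log:
          for _ in range(24):                 # one sample per hour
              result = subprocess.run(
                  ["iperf", "-u", "-c", SERVER, "-b", OFFERED_RATE, "-t", "30"],
                  capture_output=True, text=True)
              log.write("%s\n%s\n" % (datetime.datetime.now(), result.stdout))
              log.flush()
              time.sleep(3600)
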
3.3. TCP Throughput Tests

   This methodology specifically defines TCP throughput techniques to
   verify sustained TCP performance in a managed business IP network,
   as defined in section 2.1.  This section and the following sections
   define the method to conduct these sustained TCP throughput tests
   and provide guidelines for the predicted results.

   With baseline measurements of round trip time and bandwidth from
   section 3.2, a series of single and multiple TCP connection
   throughput tests SHOULD be conducted to baseline network performance
   against expectations.  The number of trials and the type of testing
   (single versus multiple connections) will vary according to the
   intention of the test.  One example would be a single connection
   test in which the throughput achieved by large Send Socket Buffer
   and TCP Receive Window sizes (i.e. 256 KB) is to be measured.  It
   would be advisable to test performance at various times of the
   business day.

   It is RECOMMENDED to run the tests in each direction independently
   first, then to run both directions simultaneously.  In each case,
   TCP Transfer Time, TCP Efficiency, and Buffer Delay Percentage MUST
   be measured in each direction.  These metrics are defined in section
   3.3.2.

3.3.1 Calculate Ideal TCP Receive Window Size

   The ideal TCP Receive Window size can be calculated from the
   bandwidth delay product (BDP), which is:

   BDP (bits) = RTT (sec) x Bandwidth (bps)

   Note that the RTT is being used as the "Delay" variable in the BDP
   calculations.

   Then, by dividing the BDP by 8, we obtain the "ideal" TCP Receive
   Window size in Bytes.  For optimal results, the Send Socket Buffer
   size must be adjusted to the same value at the opposite end of the
   network path.

   Ideal TCP RWIN = BDP / 8

   An example would be a T3 link with 25 msec RTT.  The BDP would equal
   ~1,105,000 bits and the ideal TCP Receive Window would be ~138
   KBytes.

   The following table provides some representative network Link Speeds,
   RTT, BDP, and their associated Ideal TCP Receive Window sizes.

   Table 3.3.1: Link Speed, RTT and calculated BDP & TCP Receive Window

   Link                                               Ideal TCP
   Speed*           RTT               BDP             Receive Window
   (Mbps)           (ms)             (bits)            (KBytes)
   ---------------------------------------------------------------------
    1.536            20              30,720              3.84
    1.536            50              76,800              9.60
    1.536           100             153,600             19.20
    44.21            10             442,100             55.26
    44.21            15             663,150             82.89
    44.21            25           1,105,250            138.16
    100               1             100,000             12.50
    100               2             200,000             25.00
    100               5             500,000             62.50
    1,000           0.1             100,000             12.50
    1,000           0.5             500,000             62.50
    1,000             1           1,000,000            125.00
    10,000          0.05            500,000             62.50
    10,000          0.3           3,000,000            375.00

   * Note that link speed is the bottleneck bandwidth for the NUT

   The following serial link speeds are used:
   - T1 = 1.536 Mbits/sec (for a B8ZS line encoding facility)
   - T3 = 44.21 Mbits/sec (for a C-Bit Framing facility)

   The above table illustrates the ideal TCP Receive Window size.
   If a smaller TCP Receive Window is used, then the TCP Throughput
   is not optimal. To calculate the Ideal TCP Throughput, the following
   formula is used: TCP Throughput = TCP RWIN X 8 / RTT

   An example could be a 100 Mbps IP path with 5 ms RTT and a TCP
   Receive Window size of 16KB, then:

   TCP Throughput = 16 KBytes X 8 bits / 5 ms.
   TCP Throughput = 128,000 bits / 0.005 sec.
   TCP Throughput = 25.6 Mbps.

   Another example for a T3 using the same calculation formula is
   illustrated below:
   TCP Throughput = TCP RWIN X 8 / RTT.
   TCP Throughput = 16 KBytes X 8 bits / 10 ms.
   TCP Throughput = 128,000 bits / 0.01 sec.
   TCP Throughput = 12.8 Mbps.

   When the TCP Receive Window size exceeds the BDP (i.e. T3 link,
   64 KBytes TCP Receive Window on a 10 ms RTT path), the maximum frames
   per second limit of 3664 is reached and the calculation formula is:

   TCP Throughput = Max FPS X MSS X 8.
   TCP Throughput = 3664 FPS X 1460 Bytes X 8 bits.
   TCP Throughput = 42.8 Mbps
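
   The calculations in this section can be scripted for convenience.
   The Python sketch below reproduces the figures used above (the 138
   KByte ideal window for a 25 msec T3, the window-limited case and the
   42.8 Mbps FPS-limited ceiling); it is illustrative only and not part
   of the methodology.

      # Reproduces the example figures above (T3 = 44.21 Mbps, MSS = 1460).
      def ideal_rwin_bytes(bandwidth_bps, rtt_s):
          bdp_bits = bandwidth_bps * rtt_s            # BDP = Bandwidth x RTT
          return bdp_bits / 8                         # Ideal TCP RWIN = BDP / 8

      def tcp_throughput_bps(rwin_bytes, rtt_s, max_fps=3664, mss=1460):
          window_limited = rwin_bytes * 8 / rtt_s     # TCP RWIN X 8 / RTT
          fps_limited = max_fps * mss * 8             # Max FPS X MSS X 8
          return min(window_limited, fps_limited)

      rwin = ideal_rwin_bytes(44.21e6, 0.025)
      print(round(rwin / 1000, 1))                    # -> 138.2 (KBytes)
      print(tcp_throughput_bps(16e3, 0.010) / 1e6)    # -> 12.8 Mbps (16KB, 10 ms)
      print(round(tcp_throughput_bps(rwin, 0.025) / 1e6, 1))   # -> 42.8 Mbps
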
   The following diagram compares achievable TCP throughputs on a T3
   with Send Socket Buffer & TCP Receive Window sizes of 16KB vs. 64KB.

           45|
             |           _______42.8M
           40|           |64KB |
TCP          |           |     |
Throughput 35|           |     |
in Mbps      |           |     |          +-----+34.1M
           30|           |     |          |64KB |
             |           |     |          |     |
           25|           |     |          |     |
             |           |     |          |     |
           20|           |     |          |     |          _______20.5M
             |           |     |          |     |          |64KB |
           15|           |     |          |     |          |     |
             |12.8M+-----|     |          |     |          |     |
           10|     |16KB |     |          |     |          |     |
             |     |     |     |8.5M+-----|     |          |     |
            5|     |     |     |    |16KB |     |5.1M+-----|     |
             |_____|_____|_____|____|_____|_____|____|16KB |_____|_____
                        10               15               25
                                RTT in milliseconds

   The following diagram shows the achievable TCP throughput on a 25ms
   T3 when Send Socket Buffer & TCP Receive Window sizes are increased.

           45|
             |
           40|                                             +-----+40.9M
TCP          |                                             |     |
Throughput 35|                                             |     |
 in Mbps      |                                             |     |
           30|                                             |     |
             |                                             |     |
           25|                                             |     |
             |                                             |     |
           20|                               +-----+20.5M  |     |
             |                               |     |       |     |
           15|                               |     |       |     |
             |                               |     |       |     |
           10|                  +-----+10.2M |     |       |     |
             |                  |     |      |     |       |     |
            5|     +-----+5.1M  |     |      |     |       |     |
             |_____|_____|______|_____|______|_____|_______|_____|_____
                     16           32           64            128*
                          TCP Receive Window size in KBytes

   * Note that 128KB requires [RFC1323] TCP Window scaling option.

3.3.2 Metrics for TCP Throughput Tests

   This framework focuses on a TCP throughput methodology and also
   provides several basic metrics to compare results of various
   throughput tests.  It is recognized that the complexity and
   unpredictability of TCP makes it impossible to develop a complete
   set of metrics that accounts for the myriad of variables (i.e. RTT
   variation, loss conditions, TCP implementation, etc.).  However,
   these basic metrics will facilitate TCP throughput comparisons
   under varying network conditions and between network traffic
   management techniques.

   The first metric is the TCP Transfer Time, which is simply the
   measured time it takes to transfer a block of data across
   simultaneous TCP connections.  This concept is useful when
   benchmarking traffic management techniques and where multiple TCP
   connections are required.

   TCP Transfer time may also be used to provide a normalized ratio of
   the actual TCP Transfer Time versus the Ideal Transfer Time.  This
   ratio is called the TCP Transfer Index and is defined as:

                      Actual TCP Transfer Time
                     -------------------------
                      Ideal TCP Transfer Time

   The Ideal TCP Transfer time is derived from the network path
   bottleneck bandwidth and the various Layer 1/2/3/4 overheads
   associated with the network path.  Additionally, both the TCP
   Receive Window and the Send Socket Buffer sizes must be tuned to
   equal the bandwidth delay product (BDP) as described in section
   3.3.1.

   The following table illustrates the Ideal TCP Transfer time of a
   single TCP connection when its TCP Receive Window and Send Socket
   Buffer sizes are equal to the BDP.

   Table 3.3.2: Link Speed, RTT, BDP, TCP Throughput, and
                Ideal TCP Transfer time for a 100 MB File

    Link                             Maximum             Ideal TCP
    Speed                   BDP      Achievable TCP      Transfer time
    (Mbps)     RTT (ms)   (KBytes)   Throughput(Mbps)    (seconds)
   --------------------------------------------------------------------
    1.536        50          9.6          1.4                571
    44.21        25        138.2         42.8                 18
    100           2         25.0         94.9                  9
    1,000         1        125.0        949.2                  1
    10,000      0.05        62.5        9,492                0.1

    Transfer times are rounded for simplicity.

   For a 100 MB file (100 x 8 = 800 Mbits), the Ideal TCP Transfer Time
   is derived as follows:

                                           800 Mbits
       Ideal TCP Transfer Time = -----------------------------------
                                  Maximum Achievable TCP Throughput
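
   As an informal illustration (not part of the methodology), the
   short Python sketch below reproduces the Ideal TCP Transfer Time
   column of Table 3.3.2 from the file size and the Maximum Achievable
   TCP Throughput; the function name is the editor's, and the table
   above rounds its results.

   # Illustrative sketch: Ideal TCP Transfer Time for a 100 MB file.

   def ideal_tcp_transfer_time_sec(file_mbytes, max_tcp_mbps):
       """Ideal TCP Transfer Time = file size in Mbits divided by
       the Maximum Achievable TCP Throughput (layer 4)."""
       return (file_mbytes * 8.0) / max_tcp_mbps

   # Maximum Achievable TCP Throughput values from Table 3.3.2
   for link, mbps in (("T1", 1.4), ("T3", 42.8), ("100M", 94.9),
                      ("GigE", 949.2), ("10GigE", 9492.0)):
       secs = ideal_tcp_transfer_time_sec(100, mbps)
       print("%-7s %8.1f seconds" % (link, secs))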

   The maximum achievable layer 2 throughput on T1 and T3 Interfaces
   is based on the maximum frames per second (FPS) permitted by the
   actual layer 1 speed when the MTU is 1500 Bytes.

   The maximum FPS for a T1 is 127 and the calculation formula is:

   FPS = T1 Link Speed / ((MTU + PPP + Flags + CRC16) X 8)
   FPS = 1.536 Mbps / ((1500 Bytes + 4 Bytes + 2 Bytes + 2 Bytes) X 8)
   FPS = 1.536 Mbps / (1508 Bytes X 8)
   FPS = 1.536 Mbps / 12064 bits
   FPS = 127

   The maximum FPS for a T3 is 3664 and the calculation formula is:

   FPS = T3 Link Speed / ((MTU + PPP + Flags + CRC16) X 8)
   FPS = 44.21 Mbps / ((1500 Bytes + 4 Bytes + 2 Bytes + 2 Bytes) X 8)
   FPS = 44.21 Mbps / (1508 Bytes X 8)
   FPS = 44.21 Mbps / 12064 bits
   FPS = 3664

   The 1508 equates to:

     MTU + PPP + Flags + CRC16

   Where MTU is 1500 Bytes, PPP is 4 Bytes, Flags are 2 Bytes, and
   CRC16 is 2 Bytes.

   Then, to obtain the Maximum Achievable TCP Throughput (layer 4), we
   simply use:  MSS in Bytes X 8 bits X max FPS.

   For a T3, Maximum TCP Throughput = 1460 Bytes X 8 bits X 3664 FPS
   Maximum TCP Throughput = 11680 bits X 3664 FPS
   Maximum TCP Throughput = 42.8 Mbps

   The maximum achievable layer 2 throughput on Ethernet Interfaces is
   based on the maximum frames per second permitted by the IEEE802.3
   standard when the MTU is 1500 Bytes.

   The maximum FPS for 100M Ethernet is 8127 and the calculation
   formula is:

   FPS = 100 Mbps / (1538 Bytes X 8 bits)

   The maximum FPS for GigE is 81274 and the calculation formula is:

   FPS = 1 Gbps / (1538 Bytes X 8 bits)

   The maximum FPS for 10GigE is 812743 and the calculation formula is:

   FPS = 10 Gbps / (1538 Bytes X 8 bits)

   The 1538 equates to:

     MTU + Eth + CRC32 + IFG + Preamble + SFD

   Where MTU is 1500 Bytes, Ethernet is 14 Bytes, CRC32 is 4 Bytes,
   IFG is 12 Bytes, Preamble is 7 Bytes, and SFD is 1 Byte.

   Note that better results could be obtained with jumbo frames on
   GigE and 10 GigE.

   Then, to obtain the Maximum Achievable TCP Throughput (layer 4), we
   simply use:  MSS in Bytes X 8 bits X max FPS.

   For 100M Ethernet,
   Maximum TCP Throughput = 1460 Bytes X 8 bits X 8127 FPS
   Maximum TCP Throughput = 11680 bits X 8127 FPS
   Maximum TCP Throughput = 94.9 Mbps
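
   The frames per second arithmetic above can be expressed compactly.
   The following Python sketch is illustrative only; the function
   names are the editor's, a 1460 Byte MSS is assumed, and the
   per-frame overheads are the 1508 Byte (T1/T3 with PPP) and 1538
   Byte (Ethernet) values described above.

   # Illustrative sketch: maximum FPS and Maximum Achievable TCP
   # Throughput (layer 4) from link speed and per-frame overhead.

   MSS = 1460  # Bytes (1500 Byte MTU minus 40 Bytes of TCP/IP header)

   def max_fps(link_bps, bytes_per_frame):
       """Frames per second permitted by the layer 1 speed."""
       return link_bps / (bytes_per_frame * 8.0)

   def max_tcp_throughput_mbps(link_bps, bytes_per_frame, mss=MSS):
       """Maximum Achievable TCP Throughput = MSS x 8 x max FPS."""
       return mss * 8 * max_fps(link_bps, bytes_per_frame) / 1e6

   # T1/T3 (PPP):  1500 + 4 + 2 + 2 = 1508 Bytes per frame
   # Ethernet:     1500 + 14 + 4 + 12 + 7 + 1 = 1538 Bytes per frame
   print("T3   : %5.1f Mbps" % max_tcp_throughput_mbps(44.21e6, 1508))
   print("100M : %5.1f Mbps" % max_tcp_throughput_mbps(100e6, 1538))
   print("GigE : %5.1f Mbps" % max_tcp_throughput_mbps(1e9, 1538))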

   To illustrate the TCP Transfer Time Index, an example would be the
   bulk transfer of 100 MB over 5 simultaneous TCP connections (each
   connection uploading 100 MB).  In this example, the Ethernet
   service provides a Committed Access Rate (CAR) of 500 Mbit/s.  Each
   connection may achieve different throughputs during a test and the
   overall throughput rate is not always easy to determine (especially
   as the number of connections increases).

   The ideal TCP Transfer Time would be ~8 seconds, but in this
   example, the actual TCP Transfer Time was 12 seconds.  The TCP
   Transfer Time Index would then be 12/8 = 1.5, which indicates that
   the transfer across all connections took 1.5 times longer than the
   ideal.

   The second metric is TCP Efficiency, which is the percentage of
   Bytes that were not retransmitted and is defined as:

                Transmitted Bytes - Retransmitted Bytes
                ---------------------------------------  x 100
                          Transmitted Bytes

   Transmitted Bytes are the total number of TCP payload Bytes to be
   transmitted, which includes the original and the retransmitted
   Bytes.  This metric provides a comparative measure between various
   QoS mechanisms like traffic management or congestion avoidance.
   Various TCP implementations like Reno, Vegas, etc. could also be
   compared.

   As an example, if 100,000 Bytes were sent and 2,000 had to be
   retransmitted, the TCP Efficiency would be calculated as:

                   102,000 - 2,000
                   ---------------  x 100 = 98.03%
                       102,000

   Note that the retransmitted Bytes may have occurred more than once,
   and these multiple retransmissions are added to the Retransmitted
   Bytes count (and the Transmitted Bytes count).

   The third metric is the Buffer Delay Percentage, which represents
   the increase in RTT during a TCP throughput test with respect to
   the inherent or baseline network RTT.  The baseline RTT is the
   round-trip time inherent to the network path under non-congested
   conditions (see section 3.2.1 for details concerning baseline RTT
   measurements).

   The Buffer Delay Percentage is defined as:

              Average RTT during Transfer - Baseline RTT
              ------------------------------------------ x 100
                             Baseline RTT

   As an example, the baseline RTT for the network path is 25 msec.
   During the course of a TCP transfer, the average RTT across the
   entire transfer increased to 32 msec.  In this example, the Buffer
   Delay Percentage would be calculated as:

                          32 - 25
                          ------- x 100 = 28%
                             25

   Note that the TCP Transfer Time, TCP Efficiency, and Buffer Delay
   Percentage MUST be measured during each throughput test.  Poor TCP
   Transfer Time Indexes (TCP Transfer Time greater than the Ideal TCP
   Transfer Time) may be diagnosed by correlating with sub-optimal TCP
   Efficiency and/or Buffer Delay Percentage metrics.
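
   The three metrics above lend themselves to a direct calculation.
   The Python sketch below is illustrative only; the function names
   are the editor's, and the example values are the ones used in this
   section (a 12 second transfer against an 8 second ideal, 2,000
   retransmitted Bytes out of 102,000 transmitted, and an RTT increase
   from 25 to 32 msec).

   # Illustrative sketch of the three metrics defined in this section.

   def tcp_transfer_time_index(actual_sec, ideal_sec):
       """Actual TCP Transfer Time / Ideal TCP Transfer Time."""
       return float(actual_sec) / float(ideal_sec)

   def tcp_efficiency_pct(transmitted_bytes, retransmitted_bytes):
       """Percentage of transmitted Bytes that were not
       retransmissions; transmitted_bytes includes the original and
       the retransmitted Bytes."""
       return (float(transmitted_bytes - retransmitted_bytes) /
               float(transmitted_bytes)) * 100

   def buffer_delay_pct(avg_rtt_ms, baseline_rtt_ms):
       """RTT increase during the transfer relative to baseline."""
       return (float(avg_rtt_ms - baseline_rtt_ms) /
               float(baseline_rtt_ms)) * 100

   print(tcp_transfer_time_index(12, 8))    # 1.5
   print(tcp_efficiency_pct(102000, 2000))  # ~98.0
   print(buffer_delay_pct(32, 25))          # 28.0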

3.3.3 Conducting the TCP Throughput Tests

   Several TCP tools are currently used in the network world and one
   of the most common is "iperf".  With this tool, hosts are installed
   at each end of the network path; one acts as client and the other
   as a server.  The Send Socket Buffer and the TCP Receive Window
   sizes of both the client and the server can be manually set.  The
   achieved throughput can then be measured, either uni-directionally
   or
   bi-directionally.  For higher BDP situations in lossy networks
   (long fat networks or satellite links, etc.), TCP options such as
   Selective Acknowledgment SHOULD be considered and also become part of
   the window size / throughput characterization.

   Host hardware performance must be well understood before conducting
   the TCP throughput tests and the other tests described in the
   following sections.  A dedicated communications test instrument
   will generally be required, especially for line rates of GigE and
   10 GigE.  A compliant TCP TTD SHOULD provide a warning message when
   the expected test throughput will exceed 10% of the network
   bandwidth capacity.  If the throughput test is expected to exceed
   10% of the provider bandwidth, then the test should be coordinated
   with the network provider.  This does not include the customer
   premises bandwidth; the 10% refers directly to the provider's
   bandwidth (Provider Edge to Provider router).

   The TCP throughput test should be run over a long enough duration
   to properly exercise network buffers (greater than 30 seconds) and
   also characterize performance at different time periods of the day.
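
   As a rough illustration of the two checks above (the 10% provider
   bandwidth warning and the minimum test duration), the following
   Python sketch shows one way a test tool could validate its
   configuration before starting.  It is a sketch only; the function
   name and message wording are the editor's.

   # Illustrative sketch: pre-flight checks a TCP throughput test
   # tool could perform, per the guidance in this section.

   def check_test_parameters(expected_throughput_mbps,
                             provider_bandwidth_mbps,
                             test_duration_sec):
       warnings = []
       # Warn when the test would exceed 10% of the provider
       # (Provider Edge to Provider router) bandwidth.
       if expected_throughput_mbps > 0.10 * provider_bandwidth_mbps:
           warnings.append("Coordinate with the network provider: "
                           "expected throughput exceeds 10% of the "
                           "provider bandwidth.")
       # Run long enough to exercise network buffers (> 30 seconds).
       if test_duration_sec <= 30:
           warnings.append("Increase the test duration beyond 30 "
                           "seconds to properly exercise buffers.")
       return warnings

   for warning in check_test_parameters(500, 1000, 20):
       print(warning)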


3.3.4 Single vs. Multiple TCP Connection Testing

   The decision whether to conduct single or multiple TCP connection
   tests depends upon the size of the BDP in relation to the
   configured TCP Receive Window sizes in the end-user environment.
   For example, if the BDP for a long fat network turns out to be 2MB,
   then it is probably more realistic to test this network path with
   multiple connections.  Assuming typical host TCP Receive Window
   sizes of 64 KB, using 32 TCP connections would realistically test
   this path.

   The following table is provided to illustrate the relationship
   between the TCP Receive Window size and the number of TCP
   connections required to utilize the available capacity of a given
   BDP.  For this example, the network bandwidth is 500 Mbps and the
   RTT is 5 ms; the BDP then equates to 312.5 KBytes.

      TCP        Number of TCP Connections
      Window     to fill available bandwidth
     -------------------------------------
       16KB             20
       32KB             10
       64KB              5
      128KB              3
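
   The table above can be reproduced with a one line calculation: the
   number of connections is the BDP divided by the per-connection TCP
   Receive Window, rounded up.  A minimal Python sketch (illustrative
   only; the function name is the editor's, 1 KByte = 1024 Bytes):

   # Illustrative sketch: TCP connections needed to fill a given BDP
   # with a fixed per-connection TCP Receive Window.
   import math

   def connections_to_fill(bandwidth_bps, rtt_sec, window_bytes):
       bdp_bytes = bandwidth_bps * rtt_sec / 8.0
       return int(math.ceil(bdp_bytes / window_bytes))

   # 500 Mbps and 5 ms RTT -> BDP of 312.5 KBytes
   for kbytes in (16, 32, 64, 128):
       n = connections_to_fill(500e6, 0.005, kbytes * 1024)
       print("%3d KB window -> %2d connections" % (kbytes, n))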

   The TCP Transfer Time metric is useful for conducting multiple
   connection tests.  Each connection should be configured to transfer
   payloads of the same size (e.g., 100 MB), and the TCP Transfer Time
   should provide a simple metric to verify the actual versus expected
   results.

   Note that the TCP Transfer Time is the time for all connections to
   complete the transfer of the configured payload size.  From the
   previous table, the 64KB window is considered.  Each of the 5 TCP
   connections would be configured to transfer 100MB, and each one
   should obtain a maximum of 100 Mb/sec.  So for this example, the
   100MB payload should be transferred across the connections in
   approximately 8 seconds (which would be the ideal TCP Transfer Time
   under these conditions).

   Additionally, the TCP Efficiency metric MUST be computed for each
   connection tested (as defined in section 3.3.2).

3.3.5 Interpretation of the TCP Throughput Results

   At the end of this step, the user will document the theoretical BDP
   and a set of Window size experiments with measured TCP throughput
   for each TCP window size.  For cases where the sustained TCP
   throughput does not equal the ideal value, some possible causes
   are:

   - Network congestion causing packet loss, which MAY be inferred
     from a poor TCP Efficiency % (higher TCP Efficiency % = less
     packet loss)
   - Network congestion causing an increase in RTT, which MAY be
     inferred from the Buffer Delay Percentage (i.e., 0% = no increase
     in RTT over baseline)
   - Intermediate network devices which actively regenerate the TCP
     connection and can alter the TCP Receive Window size, MSS, etc.
   - Rate limiting (policing).  More details on traffic management
     tests follow in section 3.4.

3.4. Traffic Management Tests

   In most cases, the network connection between two geographic
   locations (branch offices, etc.) is lower in capacity than the
   network connection to the host computers.  An example would be LAN
   connectivity of GigE and WAN connectivity of 100 Mbps.  The WAN
   connectivity may be physically 100 Mbps or logically 100 Mbps (over
   a GigE WAN connection).  In the latter case, rate limiting is used
   to provide the WAN bandwidth per the SLA.

   Traffic management techniques are employed to provide various forms
   of QoS; the more common techniques include:

   - Traffic Shaping
   - Priority queuing
   - Random Early Discard (RED)

   Configuring the end-to-end network with these various traffic
   management mechanisms is a complex undertaking.  For traffic
   shaping and RED techniques, the end goal is to provide better
   performance to bursty traffic such as TCP (RED is specifically
   intended for TCP).

   This section of the methodology provides guidelines to test traffic
   shaping and RED implementations.  As in section 3.3, host hardware
   performance must be well understood before conducting the traffic
   shaping and RED tests.  A dedicated communications test instrument
   will generally be required for line rates of GigE and 10 GigE.  If
   the throughput test is expected to exceed 10% of the provider
   bandwidth, then the test should be coordinated with the network
   provider.  This does not include the customer premises bandwidth;
   the 10% refers directly to the provider's bandwidth (Provider Edge
   to Provider router).  Note that GigE and 10 GigE interfaces might
   benefit from hold-queue adjustments in order to prevent the
   saw-tooth TCP traffic pattern.

3.4.1 Traffic Shaping Tests

   For services where the available bandwidth is rate limited, two (2)
   techniques can be used: traffic policing or traffic shaping.

   Simply stated, traffic policing marks and/or drops packets which
   exceed the SLA bandwidth (in most cases, excess traffic is dropped).
   Traffic shaping employs the use of queues to smooth the bursty
   traffic and then send it out within the SLA bandwidth limit
   (without dropping packets unless the traffic shaping queue is
   exhausted).

   Traffic shaping is generally configured for TCP data services and
   can provide improved TCP performance since the retransmissions are
   reduced, which in turn optimizes TCP throughput for the available
   bandwidth.  Throughout this section, the rate-limited bandwidth
   shall be referred to as the "bottleneck bandwidth".

   Proper traffic shaping is more easily detected when conducting a
   multiple TCP connections test.  Proper shaping will
   provide a fair distribution of the available bottleneck bandwidth,
   while traffic policing will not.

   The traffic shaping tests are built upon the concepts of multiple
   connections testing as defined in section 3.3.4.  Calculating the
   BDP for the bottleneck bandwidth is required before selecting the
   number of connections and the Send Socket Buffer and TCP Receive
   Window sizes per connection.

   Similar to the example in section 3.3, a typical test scenario
   might be:  GigE LAN with a 500 Mbps bottleneck bandwidth (rate
   limited logical interface) and 5 msec RTT.  This would require five
   (5) TCP connections with 64 KB Send Socket Buffer and TCP Receive
   Window sizes to evenly fill the bottleneck bandwidth (~100 Mbps per
   connection).

   The traffic shaping test should be run over a long enough duration to
   properly exercise network buffers (greater than 30 seconds) and also
   characterize performance during different time periods of the day.
   The throughput of each connection MUST be logged during the entire
   test, along with the TCP Transfer Time, TCP Efficiency, and
   Buffer Delay Percentage.

3.4.1.1 Interpretation of Traffic Shaping Test Results

   By plotting the throughput achieved by each TCP connection, the fair
   sharing of the bandwidth is generally very obvious when traffic
   shaping is properly configured for the bottleneck interface.  For the
   previous example of 5 connections sharing 500 Mbps, each connection
   would consume ~100 Mbps with a smooth variation.

   If traffic policing is present on the bottleneck interface, the
   bandwidth sharing may not be fair and the resulting throughput plot
   may reveal "spikey" throughput consumption of the competing TCP
   connections (due to the TCP retransmissions).

3.4.2 RED Tests

   Random Early Discard techniques are specifically targeted to provide
   congestion avoidance for TCP traffic.  Before the network element
   queue "fills" and enters the tail drop state, RED drops packets at
   configurable queue depth thresholds.  This action causes TCP
   connections to back-off which helps to prevent tail drop, which in
   turn helps to prevent global TCP synchronization.

   Again, rate limited interfaces may benefit greatly from RED based
   techniques.  Without RED, TCP may not be able to achieve the full
   bottleneck bandwidth.  With RED enabled, TCP congestion avoidance
   throttles the connections on the higher speed interface (e.g., LAN)
   and can help achieve the full bottleneck bandwidth.  The burstiness
   of TCP traffic is a key factor in the overall effectiveness of RED
   techniques; steady state bulk transfer flows will generally not
   benefit from RED.  With bulk transfer flows, network device queues
   gracefully throttle the effective throughput rates due to increased
   delays.

   Proper RED configuration is more easily detected when conducting a
   multiple TCP connections test.  Multiple
   TCP connections provide the multiple bursty sources that emulate the
   real-world conditions for which RED was intended.

   The RED tests also build upon the concepts of multiple connections
   testing as defined in section 3.3.4.  Calculating the BDP for the
   bottleneck bandwidth is required before selecting the number of
   connections, the Send Socket Buffer size, and the TCP Receive
   Window size per connection.

   For RED testing, the desired effect is to cause the TCP connections
   to burst beyond the bottleneck bandwidth so that queue drops will
   occur.  Using the same example from section 3.4.1 (traffic
   shaping), the 500 Mbps bottleneck bandwidth requires 5 TCP
   connections (with a window size of 64KB) to fill the capacity.
   Some experimentation is required, but it is recommended to start
   with double the number of connections in order to stress the
   network element queues (10 connections in this example).

   The TCP TTD must be configured to generate these connections as
   shorter (bursty) flows versus bulk transfer type flows.  These TCP
   bursts should stress queue sizes in the 512KB range.  Again,
   experimentation will be required; the proper number of TCP
   connections and the Send Socket Buffer and TCP Receive Window sizes
   will be dictated by the size of the network element queue.
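
   The sizing reasoning above can be sketched as follows (illustrative
   only; the function name is the editor's, and the 512 KB queue is
   the example value from this section): double the number of
   connections needed to fill the bottleneck BDP, then compare the
   aggregate burst against the network element queue.

   # Illustrative sketch: starting point for RED test sizing.
   import math

   def red_test_sizing(bottleneck_bps, rtt_sec, window_bytes,
                       queue_bytes):
       bdp_bytes = bottleneck_bps * rtt_sec / 8.0
       fill_connections = int(math.ceil(bdp_bytes / window_bytes))
       start_connections = 2 * fill_connections
       aggregate_burst = start_connections * window_bytes
       return (start_connections, aggregate_burst,
               aggregate_burst > queue_bytes)

   # 500 Mbps bottleneck, 5 ms RTT, 64 KB windows, ~512 KB queue
   conns, burst, exceeds_queue = red_test_sizing(500e6, 0.005,
                                                 64 * 1024, 512 * 1024)
   print(conns, burst, exceeds_queue)  # 10 connections, 655360, True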

3.4.2.1 Interpretation of RED Results

   The default queuing technique for most network devices is FIFO based.
   Without RED, the FIFO based queue may cause excessive loss to all
   of the TCP connections and, in the worst case, global TCP
   synchronization.

   By plotting the aggregate throughput achieved on the bottleneck
   interface, proper RED operation may be determined if the bottleneck
   bandwidth is fully utilized.  For the previous example of 10
   connections (window = 64 KB) sharing 500 Mbps, each connection
   should consume ~50 Mbps.  If RED is not properly enabled on the
   interface,
   then the TCP connections will retransmit at a higher rate and the
   net effect is that the bottleneck bandwidth is not fully utilized.

   Another means to study non-RED versus RED implementations is to use
   the TCP Transfer Time metric for all of the connections.  In this
   example, a 100 MB payload transfer should ideally take 16 seconds
   across all 10 connections (with RED enabled).  With RED not
   enabled, the throughput across the bottleneck bandwidth may be
   greatly reduced (generally 10-20%) and the actual TCP Transfer Time
   may be proportionally longer than the Ideal TCP Transfer Time.

   Additionally, the TCP Efficiency metric is useful, since non-RED
   implementations may exhibit a lower TCP Efficiency.

4. Security Considerations

   The security considerations that apply to any active measurement of
   live networks are relevant here as well.  See [RFC4656] and
   [RFC5357].

5. IANA Considerations

   This document does not REQUIRE an IANA registration for ports
   dedicated to the TCP testing described in this document.

6. Acknowledgments

   Thanks to Lars Eggert, Matt Mathis, Matt Zekauskas, Al Morton,
   Yaakov Stein, and Loki Jorgenson for many good comments and for
   pointing us to great sources of information pertaining to past works
   in the TCP capacity area.

7. References

7.1 Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC4656]  Shalunov, S., Teitelbaum, B., Karp, A., Boote, J., and M.
              Zekauskas, "A One-way Active Measurement Protocol
              (OWAMP)", RFC 4656, September 2006.

   [RFC5681]  Allman, M., Paxson, V., Blanton, E., "TCP Congestion
              Control", RFC 5681, September 2009.

   [RFC2544]  Bradner, S., McQuaid, J., "Benchmarking Methodology for
              Network Interconnect Devices", RFC 2544, June 1999.

   [RFC5357]  Hedayat, K., Krzanowski, R., Morton, A., Yum, K.,
              Babiarz, J., "A Two-Way Active Measurement Protocol
              (TWAMP)", RFC 5357, October 2008.

   [RFC4821]  Mathis, M., Heffner, J., "Packetization Layer Path MTU
              Discovery", RFC 4821, June 2007.

              Allman, M., "A Bulk Transfer Capacity Methodology for
              Cooperating Hosts", draft-ietf-ippm-btc-cap-00.txt
              (work in progress), August 2001.

   [RFC2681]  Almes, G., Kalidindi, S., Zekauskas, M., "A Round-trip
              Delay Metric for IPPM", RFC 2681, September 1999.

   [RFC4898]  Mathis, M., Heffner, J., Raghunarayan, R., "TCP Extended
              Statistics MIB", RFC 4898, May 2007.

   [RFC5136]  Chimento, P., Ishac, J., "Defining Network Capacity",
              RFC 5136, February 2008.

   [RFC1323]  Jacobson, V., Braden, R., Borman, D., "TCP Extensions for
              High Performance", RFC 1323, May 1992.

7.2. Informative References

Authors' Addresses

   Barry Constantine
   JDSU, Test and Measurement Division
   One Milestone Center Court
   Germantown, MD 20876-7100
   USA

   Phone: +1 240 404 2227
   barry.constantine@jdsu.com

   Gilles Forget
   Independent Consultant to Bell Canada.
   308, rue de Monaco, St-Eustache
   Qc. CANADA, Postal Code : J7P-4T5

   Phone: (514) 895-8212
   gilles.forget@sympatico.ca

   Rudiger Geib
   Heinrich-Hertz-Strasse 3-7
   Darmstadt, Germany, 64295

   Phone: +49 6151 6282747
   Ruediger.Geib@telekom.de

   Reinhard Schrage
   Schrage Consulting

   Phone: +49 (0) 5137 909540
   reinhard@schrageconsult.com