Network Working Group                                     B. Constantine
Internet-Draft                                                      JDSU
Intended status: Informational                                 G. Forget
Expires: January 9, 2011                   Bell Canada (Ext. Consultant)
                                                            L. Jorgenson
                                                                 nooCore
                                                        Reinhard Schrage
                                                      Schrage Consulting
                                                            July 9, 2010


                   TCP Throughput Testing Methodology
                draft-ietf-ippm-tcp-throughput-tm-04.txt

Abstract

This memo describes a methodology for measuring sustained TCP
throughput performance in an end-to-end managed network environment.
This memo is intended to provide a practical approach to help users
validate the TCP layer performance of a managed network, which should
provide a better indication of end-user application level experience.
In the methodology, various TCP and network parameters are identified
that should be tested as part of the network verification at the TCP
layer.

Status of this Memo

This Internet-Draft is submitted to IETF in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups. Note that
other groups may also distribute working documents as Internet-
Drafts. Creation date July 9, 2010.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt.

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html.

This Internet-Draft will expire on January 9, 2011.

Copyright Notice

Copyright (c) 2010 IETF Trust and the persons identified as the
document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the BSD License.

Table of Contents

   1. Introduction
   2. Goals of this Methodology
      2.1 TCP Equilibrium State Throughput
      2.2 Metrics for TCP Throughput Tests
   3. TCP Throughput Testing Methodology
      3.1 Determine Network Path MTU
      3.2 Baseline Round-trip Delay and Bandwidth
          3.2.1 Techniques to Measure Round Trip Time
          3.2.2 Techniques to Measure End-end Bandwidth
      3.3 TCP Throughput Tests
          3.3.1 Calculate Optimum TCP Window Size
          3.3.2 Conducting the TCP Throughput Tests
          3.3.3 Single vs. Multiple TCP Connection Testing
          3.3.4 Interpretation of the TCP Throughput Results
      3.4 Traffic Management Tests
          3.4.1 Traffic Shaping Tests
                3.4.1.1 Interpretation of Traffic Shaping Test Results
          3.4.2 RED Tests
                3.4.2.1 Interpretation of RED Results
   4. Acknowledgements
   5. References
   Authors' Addresses

1. Introduction

Even though RFC2544 was meant to benchmark network equipment and
used by network equipment manufacturers (NEMs), network providers
have used it to benchmark operational networks in order to verify
SLAs (Service Level Agreements) before turning on a service to their
business customers. Testing an operational network prior to customer
activation is referred to as "turn-up" testing, and the SLA is
generally Layer 2/3 packet throughput, delay, loss and jitter.

Network providers are coming to the realization that Layer 2/3
testing and TCP layer testing are required to more adequately ensure
end-user satisfaction. Therefore, the network provider community
desires to measure network throughput performance at the TCP layer.
Measuring TCP throughput provides a meaningful measure with respect
to the end user's application SLA (and ultimately helps to reach some
level of TCP testing interoperability, which does not exist today).

Additionally, end-users (business enterprises) seek to conduct
repeatable TCP throughput tests between enterprise locations. Since
these enterprises rely on the networks of the providers, a common
test methodology (and metrics) would be equally beneficial to both
parties.

So the intent behind this TCP throughput work is to define a
methodology for testing sustained TCP layer performance. In this
document, sustained TCP throughput is that amount of data per unit
time that TCP transports during equilibrium (steady state), i.e.
after the initial slow start phase. We refer to this state as TCP
Equilibrium, and the equilibrium throughput is the maximum achievable
for the TCP connection(s).

One other important note: the precursor to conducting the TCP test
methodology is to perform "network stress tests" such as RFC2544
Layer 2/3 tests or other conventional tests. Examples include OWAMP
or manual packet layer test techniques where packet throughput, loss,
and delay measurements are conducted. It is highly recommended to run
traditional Layer 2/3 type tests to verify the integrity of the
network before conducting TCP tests.

2. Goals of this Methodology

Before defining the goals of this methodology, it is important to
clearly define the areas that are not intended to be measured or
analyzed by such a methodology.

- The methodology is not intended to predict TCP throughput
  behavior during the transient stages of a TCP connection, such
  as initial slow start.

[...]

The goals of this methodology are to define a method to conduct a
structured, end-to-end assessment of sustained TCP performance within
a managed business class IP network. A key goal is to establish a set
of "best practices" that an engineer should apply when validating the
ability of a managed network to carry end-user TCP applications.

Some specific goals are to:

- Provide a practical test approach that specifies the more well
  understood (and end-user configurable) TCP parameters such as
  window size, MSS (Maximum Segment Size), # connections, and how
  these affect the outcome of TCP performance over a network.

- Provide specific test conditions (link speed, RTT, window size,
  etc.) and maximum achievable TCP throughput under TCP Equilibrium
  conditions. For guideline purposes, provide examples of these test
  conditions and the maximum achievable TCP throughput during the
  equilibrium state. Section 2.1 provides specific details concerning
  the definition of TCP Equilibrium within the context of this draft.

- Define two (2) basic metrics that can be used to compare the
  performance of TCP connections under various network conditions.

- In test situations where the recommended procedure does not yield
  the maximum achievable TCP throughput result, provide some possible
  areas within the end host or network that should be considered for
  investigation (although again, this draft is not intended to
  provide a detailed diagnosis of these issues).

2.1 TCP Equilibrium State Throughput

TCP connections have three (3) fundamental congestion window phases
as documented in RFC2581. These states are:

[...] congestion avoidance before packet loss conditions occur (which
would cause the state change from congestion avoidance to a
retransmission phase). All maximum achievable throughputs specified
in Section 3 are with respect to this Equilibrium state.

2.2 Metrics for TCP Throughput Tests

This draft focuses on a TCP throughput methodology and also provides
two basic metrics to compare the results of various throughput tests.
It is recognized that the complexity and unpredictability of TCP
makes it impossible to develop a complete set of metrics that account
for the myriad of variables (i.e. RTT variation, loss conditions, TCP
implementation, etc.). However, these two basic metrics facilitate
TCP throughput comparisons under varying network conditions and
between network traffic management techniques.

The TCP Efficiency metric is the percentage of bytes that were not
retransmitted and is defined as:

   Transmitted Bytes - Retransmitted Bytes
   ---------------------------------------  x 100
              Transmitted Bytes

This metric provides a comparative measure between various QoS
mechanisms such as traffic management, congestion avoidance, and also
various TCP implementations (i.e. Reno, Vegas, etc.).

As an example, if 1000 TCP segments were sent and 20 had to be
retransmitted, the TCP Efficiency would be calculated as:

   1000 - 20
   ---------  x 100 = 98%
     1000

The second metric is the TCP Transfer Time, which is simply the time
it takes to transfer a block of data across simultaneous TCP
connections. The concept is useful when benchmarking traffic
management techniques, where multiple connections are generally
required. An example would be the bulk transfer of 10 MB across 8
separate TCP connections (each connection uploading 10 MB). Each
connection may achieve different throughputs during a test and the
overall throughput rate is not always easy to determine (especially
as the number of connections increases). But by defining the TCP
Transfer Time as the total transfer time of 10MB over all 8
connections, the single transfer time metric is a useful means to
compare various traffic management techniques (i.e. FIFO, WFQ
queuing, WRED, etc.).
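
As an illustration (not part of the formal methodology), the two
metrics could be computed from per-test logs along the following
lines; the helper names and the sample completion times are
hypothetical:

   # Illustrative sketch of the two metrics defined above.
   # Assumes all connections start at the same time, so the TCP
   # Transfer Time is the completion time of the slowest connection.

   def tcp_efficiency(transmitted, retransmitted):
       """Percentage of transmitted bytes (or segments) not retransmitted."""
       return (transmitted - retransmitted) / transmitted * 100.0

   def tcp_transfer_time(completion_times_sec):
       """Time for ALL simultaneous connections to finish their payload."""
       return max(completion_times_sec)

   print(tcp_efficiency(1000, 20))     # the example above: 98.0
   print(tcp_transfer_time([6.1, 5.9, 6.4, 6.0, 6.2, 5.8, 6.3, 6.1]))  # 6.4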

3. TCP Throughput Testing Methodology

This section summarizes the specific test methodology to achieve the
goals listed in Section 2.

As stated in Section 1, it is considered best practice to verify the
integrity of the network by conducting Layer 2/3 stress tests such as
RFC2544 (or other methods of network stress tests). If the network is
not performing properly in terms of packet loss, jitter, etc., then
the TCP layer testing will not be meaningful since the equilibrium
throughput would be very difficult to achieve (in a "dysfunctional"
network).

The following represents the sequential order of steps to conduct the
TCP throughput testing methodology:

1. Identify the Path MTU. Packetization Layer Path MTU Discovery or
PLPMTUD (RFC4821) should be conducted to verify the minimum network
path MTU. Conducting PLPMTUD establishes the upper limit for the MSS
to be used in subsequent steps.

2. Baseline Round-trip Delay and Bandwidth. These measurements
provide estimates of the ideal TCP window size, which will be used in
subsequent test steps.

3. TCP Connection Throughput Tests. With baseline measurements of
round trip delay and bandwidth, a series of single and multiple TCP
connection throughput tests can be conducted to baseline the network
performance against expectations.

4. Traffic Management Tests. Various traffic management and queuing
techniques are tested in this step, using multiple TCP connections.
Multiple connection testing can verify that the network is configured
properly for traffic shaping versus policing, various queuing
implementations, and RED.

Important to note are some of the key characteristics and
considerations for the TCP test instrument. The test host may be a
standard computer or a dedicated communications test instrument, and
these TCP test hosts must be capable of emulating both a client and a
server.

Whether the TCP test host is a standard computer or dedicated test
instrument, the following areas should be considered when selecting
a test host:

- TCP implementation used by the test host OS, i.e. Linux OS kernel
  using TCP Reno, TCP options supported, etc. This will obviously be
  more important when using custom test equipment where the TCP
  implementation may be customized or tuned to run in higher
  performance hardware.

- Most importantly, the TCP test host must be capable of generating
  and receiving stateful TCP test traffic at the full link speed of
  the network under test. As a general rule of thumb, testing TCP
  throughput at rates greater than 100 Mbit/sec generally requires
  high performance server hardware or dedicated hardware based test
  tools.

3.1. Determine Network Path MTU

TCP implementations should use Path MTU Discovery techniques (PMTUD).
PMTUD relies on ICMP 'need to frag' messages to learn the path MTU.
When a device has a packet to send which has the Don't Fragment (DF)
bit in the IP header set and the packet is larger than the Maximum
Transmission Unit (MTU) of the next hop link, the packet is dropped
and the device sends an ICMP 'need to frag' message back to the host
that originated the packet. The ICMP 'need to frag' message includes
the next hop MTU, which PMTUD uses to tune the TCP Maximum Segment
Size (MSS).

[...]

3.2. Baseline Round-trip Delay and Bandwidth

[...]

The goal would be to determine a representative minimum, average, and
maximum RTD and bandwidth for the network under test. Topology
changes are to be avoided during this time of initial convergence
(e.g. in crossing BGP4 boundaries).

In some cases, baselining bandwidth may not be required, since a
network provider's end-to-end topology may be well enough defined.

3.2.1 Techniques to Measure Round Trip Time

Following the definitions used in the references of the appendix,
Round Trip Time (RTT) is the time elapsed between the clocking in of
the first bit of a payload packet and the receipt of the last bit of
the corresponding acknowledgement. Round Trip Delay (RTD) is used
synonymously to mean twice the Link Latency.

In any method used to baseline round trip delay between network
end-points, it is important to realize that network latency is the
sum of inherent network delay and congestion. The RTT should be
baselined during "off-peak" hours to obtain a reliable figure for
network latency (versus additional delay caused by congestion).

During the actual sustained TCP throughput tests, it is critical to
measure RTT along with the measured TCP throughput. Congestive
effects can be isolated if RTT is concurrently measured.

This is not meant to provide an exhaustive list, but the following
summarizes some of the more common ways to determine round trip time
(RTT) through the network. The desired resolution of the measurement
(i.e. msec versus usec) may dictate whether the RTT measurement can
be achieved with standard tools such as ICMP ping techniques or
whether specialized test equipment with high precision timers would
be required. The objective in this section is to list several
techniques in order of decreasing accuracy.

[...]

3.2.2 Techniques to Measure End-end Bandwidth

[...] at various intervals throughout a business day (or even across
a week). Ideally, the bandwidth test should produce a log output of
the bandwidth achieved across the test interval AND the round trip
delay. And during the actual TCP level performance measurements
(Sections 3.3 - 3.4), the test tool must be able to track the round
trip time of the TCP connection(s) during the test. Measuring round
trip time variation (aka "jitter") provides insight into the effects
of congestive delay on the sustained throughput achieved for the TCP
layer test.

3.3. TCP Throughput Tests

This draft specifically defines TCP throughput techniques to verify
sustained TCP performance in a managed business network. As defined
in section 2.1, the equilibrium throughput reflects the maximum rate
achieved by a TCP connection within the congestion avoidance phase on
an end-to-end network path. This section and others will define the
method to conduct these sustained throughput tests and guidelines for
the predicted results.

With baseline measurements of round trip time and bandwidth from
section 3.2, a series of single and multiple TCP connection
throughput tests can be conducted to baseline network performance
against expectations.

3.3.1 Calculate Optimum TCP Window Size

The optimum TCP window size can be calculated from the bandwidth
delay product (BDP), which is:

   BDP (bits) = RTT (sec) x Bandwidth (bps)

By dividing the BDP by 8, the "ideal" TCP window size is calculated.
An example would be a T3 link with 25 msec RTT. The BDP would equal
~1,105,000 bits and the ideal TCP window would equal ~138,000 bytes.
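
The window calculation is simple arithmetic; the short sketch below
restates the T3 / 25 msec example (the 44.21 Mbps figure is the T3
payload bandwidth listed later in this section):

   # BDP and "ideal" TCP window for the T3 / 25 msec example.
   bandwidth_bps = 44.21e6          # T3 available payload bandwidth
   rtt_sec       = 0.025            # 25 msec round trip time

   bdp_bits           = rtt_sec * bandwidth_bps   # ~1,105,250 bits
   ideal_window_bytes = bdp_bits / 8              # ~138,156 bytes

   print(int(bdp_bits), int(ideal_window_bytes))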

The following table provides some representative network link speeds,
latency, BDP, and associated "optimum" TCP window size. Sustained TCP
transfers should reach nearly 100% throughput, minus the overhead of
Layers 1-3 and the divisor of the MSS into the window.

For this single connection baseline test, the MSS size will affect
the achieved throughput (especially for smaller TCP window sizes).
Table 3.2 provides the achievable, equilibrium TCP throughput (at
Layer 4) using a 1460 byte MSS. Also in this table, the case of 58
byte L1-L4 overhead including the Ethernet CRC32 is used for
simplicity.

Table 3.2: Link Speed, RTT and calculated BDP, TCP Throughput

 Link                              Ideal TCP         Maximum Achievable
 Speed*   RTT (ms)   BDP (bits)    Window (kbytes)   TCP Throughput (Mbps)
 --------------------------------------------------------------------------
 T1          20          30,720         3.84                 1.17
 T1          50          76,800         9.60                 1.40
 T1         100         153,600        19.20                 1.40
 T3          10         442,100        55.26                42.05
 T3          15         663,150        82.89                42.05
 T3          25       1,105,250       138.16                41.52
 T3(ATM)     10         407,040        50.88                36.50
 T3(ATM)     15         610,560        76.32                36.23
 T3(ATM)     25       1,017,600       127.20                36.27
 100M         1         100,000        12.50                91.98
 100M         2         200,000        25.00                93.44
 100M         5         500,000        62.50                93.44
 1Gig       0.1         100,000        12.50               919.82
 1Gig       0.5         500,000        62.50               934.47
 1Gig         1       1,000,000       125.00               934.47
 10Gig     0.05         500,000        62.50             9,344.67
 10Gig      0.3       3,000,000       375.00             9,344.67

* Note that link speed is the minimum link speed throughout a network;
  i.e. WAN with T1 link, etc.

Also, the following link speeds (available payload bandwidth) were
used for the WAN entries:

- T1 = 1.536 Mbits/sec (B8ZS line encoding facility)
- T3 = 44.21 Mbits/sec (C-Bit Framing)
- T3(ATM) = 36.86 Mbits/sec (C-Bit Framing & PLCP, 96000 Cells per
  second)

[...] the value determined in step 1 with the MSS & (MSS + L2 + L3 +
L4 Overheads) divided by the RTT.

3 - Finally, we multiply the calculated value of step 2 by the MSS
versus (MSS + L2 + L3 + L4 Overheads) ratio.

This gives us the achievable TCP Throughput value. Sometimes, the
maximum achievable throughput is limited by the maximum achievable
quantity of Ethernet Frames per second on the physical media. In that
case, this value is used in step 2 instead of the calculated one.
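
Because part of the step description above is abbreviated, the sketch
below is only one possible reading of the calculation (the treatment
of steps 1 and 2 is assumed), and its results may differ slightly
from Table 3.2:

   # Rough sketch of the achievable TCP throughput calculation.
   MSS      = 1460      # bytes
   OVERHEAD = 58        # L1-L4 overhead incl. Ethernet CRC32, per the text

   def achievable_tcp_throughput_bps(link_bps, rtt_sec, window_bytes):
       # Step 1 (assumed): MSS-sized segments that fit into the window.
       segments = window_bytes // MSS
       # Step 2 (assumed): wire rate implied by sending one window per
       # RTT, capped at the link's available payload bandwidth.
       wire_bps = min(segments * (MSS + OVERHEAD) * 8 / rtt_sec, link_bps)
       # Step 3: scale by the MSS / (MSS + overheads) ratio.
       return wire_bps * MSS / (MSS + OVERHEAD)

   # T3 with 25 msec RTT and the ideal 138 KB window -> roughly 42 Mbps
   print(achievable_tcp_throughput_bps(44.21e6, 0.025, 138160) / 1e6)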

The following table compares the achievable TCP throughput on a T3
link with Windows 2000/XP TCP window sizes of 16KB versus 64KB.

             Achievable TCP Throughput (Mbps) on a T3 Link
   RTT (ms)        16KB Window          64KB Window
   -------------------------------------------------
      10              14.5                 42.1
      15               9.6                 34.3
      25               5.8                 20.5

The following table shows the achievable TCP throughput on a 25 msec
T3 when the TCP window size is increased, using the RFC1323 TCP
window scaling option.

   TCP Window Size (KBytes)     Achievable TCP Throughput (Mbps)
   -------------------------------------------------------------
             16                              5.31
             32                             10.62
             64                             21.23
            128                             42.47

3.3.2 Conducting the TCP Throughput Tests

There are several TCP tools that are commonly used in the network
world and one of the most common is the "iperf" tool. With this tool,
hosts are installed at each end of the network segment; one as client
and the other as server. The TCP window size of both the client and
the server can be manually set and the achieved throughput is
measured, either uni-directionally or bi-directionally. For higher
BDP situations in lossy networks (long fat networks or satellite
links, etc.), TCP options such as Selective Acknowledgment should be
considered and also become part of the window size / throughput
characterization.
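
As an illustration only, a test controller could drive such a
client/server measurement as sketched below; the host name is
hypothetical and the flags shown (-c, -w, -t, -i, -P) are those of
the classic iperf 2 command line, so they should be verified against
the tool version actually used:

   # Illustrative driver for an iperf-style client test (the server
   # side is assumed to be running "iperf -s" on the far-end host).
   import subprocess

   SERVER = "far-end-host.example.net"     # hypothetical far-end test host

   def run_iperf_client(window="64K", duration=60, parallel=1, interval=10):
       cmd = ["iperf",
              "-c", SERVER,            # client mode, connect to the server
              "-w", window,            # requested TCP window (socket buffer)
              "-t", str(duration),     # run long enough to reach equilibrium
              "-i", str(interval),     # per-interval logging of throughput
              "-P", str(parallel)]     # number of parallel TCP connections
       return subprocess.run(cmd, capture_output=True, text=True).stdout

   print(run_iperf_client(window="64K", duration=120, parallel=1))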

Host hardware performance must be well understood before conducting
the TCP throughput tests and other tests in the following sections.
Dedicated test equipment will generally be required, especially for
line rates of GigE and 10 GigE.

The TCP throughput test should be run over a long enough duration to
properly exercise network buffers and also characterize performance
during different time periods of the day. The results must be logged
at the desired interval and the test must record RTT and TCP
retransmissions at each interval.

This correlation of retransmissions and RTT over the course of the
test will clearly identify which portions of the transfer reached TCP
Equilibrium state and to what extent increased RTT (congestive
effects) may have been the cause of reduced equilibrium performance.

Additionally, the TCP Efficiency and TCP Transfer Time metrics should
be logged in order to further characterize the window size tests.

3.3.3 Single vs. Multiple TCP Connection Testing

The decision whether to conduct single or multiple TCP connection
tests depends upon the size of the BDP in relation to the window
sizes configured in the end-user environment. For example, if the BDP
for a long-fat pipe turns out to be 2MB, then it is probably more
realistic to test this pipe with multiple connections. Assuming
typical host computer window settings of 64 KB, using 32 connections
would realistically test this pipe.

The following table is provided to illustrate the relationship of the
BDP, window size, and the number of connections required to utilize
the available capacity. For this example, the network bandwidth is
500 Mbps, the RTT is equal to 5 ms, and the BDP equates to 312
KBytes.

   Window     #Connections to Fill Link
   ----------------------------------
    16KB                20
    32KB                10
    64KB                 5
   128KB                 3

The TCP Transfer Time metric is useful for conducting multiple
connection tests. Each connection should be configured to transfer a
certain payload (i.e. 100 MB), and the TCP Transfer Time provides a
simple metric to verify the actual versus expected results.

Note that the TCP Transfer Time is the time for all connections to
complete the transfer of the configured payload size. From the
example table listed above, the 64KB window case is considered. Each
of the 5 connections would be configured to transfer 100MB, and each
TCP connection should obtain a maximum of 100 Mb/sec. So for this
example, the 100MB payload should be transferred across the
connections in approximately 8 seconds (which would be the ideal TCP
Transfer Time for these conditions).
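
The connection counts in the table above and the 8 second figure in
this example follow from simple arithmetic, restated in the sketch
below:

   # Sizing of multiple connections and the ideal TCP Transfer Time
   # for the 500 Mbps / 5 msec / 64 KB example.
   import math

   bottleneck_bps = 500e6
   rtt_sec        = 0.005
   window_bytes   = 64 * 1024
   payload_bytes  = 100e6                   # 100 MB per connection

   bdp_bytes           = bottleneck_bps * rtt_sec / 8           # ~312 KBytes
   connections_to_fill = math.ceil(bdp_bytes / window_bytes)    # 5
   per_connection_bps  = bottleneck_bps / connections_to_fill   # ~100 Mbps
   ideal_transfer_sec  = payload_bytes * 8 / per_connection_bps # ~8 seconds

   print(connections_to_fill, per_connection_bps / 1e6, ideal_transfer_sec)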

Additionally, the TCP Efficiency metric should be computed for each
connection tested (as defined in section 2.2).

3.3.4 Interpretation of the TCP Throughput Results

At the end of this step, the user will document the theoretical BDP
and a set of window size experiments with measured TCP throughput for
each TCP window size setting. For cases where the sustained TCP
throughput does not equal the predicted value, some possible causes
are listed:

- Network congestion causing packet loss; the TCP Efficiency metric
  is a useful gauge to compare network performance
- Network congestion not causing packet loss but increasing RTT
- Intermediate network devices which actively regenerate the TCP
  connection and can alter window size, MSS, etc.
- Over-utilization of the available link or rate limiting (policing).
  More discussion of traffic management tests follows in section 3.4

3.4. Traffic Management Tests

In most cases, the network connection between two geographic
locations (branch offices, etc.) has lower capacity than the network
connections of the host computers. An example would be LAN
connectivity of GigE and WAN connectivity of 100 Mbps. The WAN
connectivity may be physically 100 Mbps or logically 100 Mbps (over a
GigE WAN connection). In the latter case, rate limiting is used to
provide the WAN bandwidth per the SLA.

Traffic management techniques are employed to provide various forms
of QoS; the more common ones include:

- Traffic Shaping
- Priority Queuing
- Random Early Discard (RED, etc.)

Configuring the end-to-end network with these various traffic
management mechanisms is a complex undertaking. For traffic shaping
and RED techniques, the end goal is to provide better performance for
bursty traffic such as TCP (RED is specifically intended for TCP).

This section of the methodology provides guidelines to test traffic
shaping and RED implementations. As in section 3.3, host hardware
performance must be well understood before conducting the traffic
shaping and RED tests. Dedicated test equipment will generally be
required, especially for line rates of GigE and 10 GigE.

3.4.1 Traffic Shaping Tests

For services where the available bandwidth is rate limited, there are
two (2) techniques used to implement rate limiting: traffic policing
and traffic shaping.

Simply stated, traffic policing marks and/or drops packets which
exceed the SLA bandwidth (in most cases, excess traffic is dropped).
Traffic shaping employs the use of queues to smooth the bursty
traffic and then sends it out within the SLA bandwidth limit (without
dropping packets unless the traffic shaping queue is exceeded).

Traffic shaping is generally configured for TCP data services and can
provide improved TCP performance since retransmissions are reduced,
which in turn optimizes TCP throughput for the given available
bandwidth. Throughout this section, the available rate-limited
bandwidth shall be referred to as the "bottleneck bandwidth".

The ability to detect proper traffic shaping is more easily diagnosed
when conducting a multiple TCP connection test. Proper shaping will
provide a fair distribution of the available bottleneck bandwidth,
while traffic policing will not.

The traffic shaping tests build upon the concepts of multiple
connection testing as defined in section 3.3.3. Calculating the BDP
for the bottleneck bandwidth is first required, and then the number
of connections / window size per connection can be selected.

Similar to the example in section 3.3, a typical test scenario might
be: GigE LAN with a 500 Mbps bottleneck bandwidth (rate limited
logical interface), and 5 msec RTT. This would require five (5) TCP
connections of 64 KB window size to evenly fill the bottleneck
bandwidth (about 100 Mbps per connection).

The traffic shaping tests should be run over a long enough duration
to properly exercise network buffers and also characterize
performance during different time periods of the day. The throughput
of each connection must be logged during the entire test, along with
the TCP Efficiency and TCP Transfer Time metrics. Additionally, it is
recommended to log RTT and retransmissions per connection over the
test interval.

3.4.1.1 Interpretation of Traffic Shaping Test Results

By plotting the throughput achieved by each TCP connection, the fair
sharing of the bandwidth is generally very obvious when traffic
shaping is properly configured for the bottleneck interface. For the
previous example of 5 connections sharing 500 Mbps, each connection
would consume ~100 Mbps with a smooth variation. If traffic policing
was present on the bottleneck interface, the bandwidth sharing would
not be fair and the resulting throughput plot would reveal "spiky"
throughput consumption of the competing TCP connections (due to the
retransmissions).
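
To make the "fair versus spiky" judgement less subjective, a fairness
score can be computed over the per-connection throughput samples. The
use of Jain's fairness index below is an illustrative suggestion
rather than part of the methodology, and the sample values are
hypothetical:

   # Illustrative fairness check across competing TCP connections.
   # Values near 1.0 indicate fair (shaped) sharing; noticeably lower
   # values suggest policing-style "spiky" behavior.

   def jain_fairness(throughputs_mbps):
       n = len(throughputs_mbps)
       return sum(throughputs_mbps) ** 2 / (n * sum(t * t for t in throughputs_mbps))

   shaped  = [ 99.0, 101.5, 100.2,  98.7, 100.6]   # hypothetical shaped result
   policed = [160.0,  40.0, 155.0,  60.0,  85.0]   # hypothetical policed result

   print(round(jain_fairness(shaped), 3), round(jain_fairness(policed), 3))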

3.4.2 RED Tests

Random Early Discard techniques are specifically targeted to provide
congestion avoidance for TCP traffic. Before the network element
queue "fills" and enters the tail drop state, RED drops packets at
configurable queue depth thresholds. This action causes TCP
connections to back off, which helps to prevent tail drop and in turn
helps to prevent global TCP synchronization.

Again, rate limited interfaces can benefit greatly from RED based
techniques. Without RED, TCP is generally not able to achieve the
full bandwidth of the bottleneck interface. With RED enabled, TCP
congestion avoidance throttles the connections on the higher speed
interface (i.e. LAN) and can reach equilibrium with the bottleneck
bandwidth (achieving closer to full throughput).

The ability to detect proper RED configuration is more easily
diagnosed when conducting a multiple TCP connection test. Multiple
TCP connections provide the multiple bursty sources that emulate the
real-world conditions for which RED was intended.

The RED tests also build upon the concepts of multiple connection
testing as defined in section 3.3.3. Calculating the BDP for the
bottleneck bandwidth is first required, and then the number of
connections / window size per connection can be selected.

For RED testing, the desired effect is to cause the TCP connections
to burst beyond the bottleneck bandwidth so that queue drops will
occur. Using the same example from section 3.4.1 (traffic shaping),
the 500 Mbps bottleneck bandwidth requires 5 TCP connections (with
window size of 64 KB) to fill the capacity. Some experimentation is
required, but it is recommended to start with double the number of
connections in order to stress the network element buffers / queues.
In this example, 10 connections would produce TCP bursts of 64KB from
each connection. If the timing of the TCP tester permits, these TCP
bursts could stress queue sizes in the 512KB range. Again,
experimentation will be required and the proper number of TCP
connections / window size will be dictated by the size of the network
element queue.
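
The burst sizing in this example is again simple arithmetic; in the
sketch below, the 512 KB queue depth is taken from the example and is
an assumption about the network element under test:

   # RED test sizing for the 500 Mbps / 5 msec / 64 KB example.
   import math

   bottleneck_bps = 500e6
   rtt_sec        = 0.005
   window_bytes   = 64 * 1024
   queue_depth_kb = 512                      # assumed network element queue

   conns_to_fill = math.ceil(bottleneck_bps * rtt_sec / 8 / window_bytes)  # 5
   conns_for_red = 2 * conns_to_fill                                       # 10
   burst_kb      = conns_for_red * window_bytes / 1024                     # 640 KB

   print(conns_for_red, burst_kb, burst_kb >= queue_depth_kb)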

3.4.2.1 Interpretation of RED Results

The default queuing technique for most network devices is FIFO based.
Without RED, the FIFO based queue will cause excessive loss to all of
the TCP connections and, in the worst case, global TCP
synchronization.

By plotting the aggregate throughput achieved on the bottleneck
interface, proper RED operation can be determined if the bottleneck
bandwidth is fully utilized. For the previous example of 10
connections (window = 64 KB) sharing 500 Mbps, each connection should
consume ~50 Mbps. If RED was not properly enabled on the interface,
then the TCP connections will retransmit at a higher rate and the net
effect is that the bottleneck bandwidth is not fully utilized.

Another means to study non-RED versus RED implementations is to use
the TCP Transfer Time metric for all of the connections. In this
example, a 100 MB payload transfer should ideally take 16 seconds
across all 10 connections (with RED enabled). With RED not enabled,
the throughput across the bottleneck bandwidth would be greatly
reduced (generally 20-40%) and the TCP Transfer Time would be
proportionally longer than the ideal transfer time.

Additionally, the TCP Efficiency metric is useful, since non-RED
implementations will exhibit a lower TCP Efficiency than RED
implementations.
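
The 16 second figure and the utilization comparison can be checked
with the short sketch below; the measured transfer time shown is
hypothetical:

   # Ideal versus measured TCP Transfer Time for the RED example.
   bottleneck_bps = 500e6
   connections    = 10
   payload_bytes  = 100e6                    # 100 MB per connection

   ideal_sec    = connections * payload_bytes * 8 / bottleneck_bps  # 16 seconds
   measured_sec = 22.5                       # hypothetical measured result
   utilization  = ideal_sec / measured_sec   # < 1.0 indicates under-utilization

   print(ideal_sec, round(utilization, 2))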

4. Acknowledgements

The author would like to thank Gilles Forget, Loki Jorgenson, and
Reinhard Schrage for technical review and contributions to this
draft-03 memo.

Also thanks to Matt Mathis and Matt Zekauskas for many good comments
through email exchange and for pointing us to great sources of
information pertaining to past works in the TCP capacity area.

[...]

Authors' Addresses

Barry Constantine
JDSU, Test and Measurement Division
One Milestone Center Court
Germantown, MD 20876-7100
USA

Phone: +1 240 404 2227
barry.constantine@jdsu.com

Gilles Forget
Independent Consultant to Bell Canada.
308, rue de Monaco, St-Eustache
Qc. CANADA, Postal Code : J7P-4T5

Phone: (514) 895-8212
gilles.forget@sympatico.ca

Loki Jorgenson
nooCore

Phone: (604) 908-5833
ljorgenson@nooCore.com

Reinhard Schrage
Schrage Consulting

Phone: +49 (0) 5137 909540
reinhard@schrageconsult.com