Network Working Group                                     B. Constantine
Internet-Draft                                                      JDSU
Intended status: Informational                                 G. Forget
Expires: December 18, 2010                 Bell Canada (Ext. Consultant)
                                                            L. Jorgenson
                                                       Apparent Networks
                                                         Reinhard Schrage
                                                       Schrage Consulting
                                                            June 8, 2010

                   TCP Throughput Testing Methodology
                draft-ietf-ippm-tcp-throughput-tm-03.txt
Abstract

This memo describes a methodology for measuring sustained TCP
throughput performance in an end-to-end managed network environment.
This memo is intended to provide a practical approach to help users
validate the TCP layer performance of a managed network, which should
provide a better indication of end-user application level experience.
In the methodology, various TCP and network parameters are identified
that should be tested as part of the network verification at the TCP
layer.
Status of this Memo

This Internet-Draft is submitted to IETF in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups. Note that
other groups may also distribute working documents as Internet-
Drafts. Creation date June 8, 2010.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt.

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html.

This Internet-Draft will expire on December 18, 2010.
Copyright Notice

Copyright (c) 2010 IETF Trust and the persons identified as the
document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the BSD License.
Table of Contents

   1. Introduction
   2. Goals of this Methodology
      2.1 TCP Equilibrium State Throughput
      2.2 Metrics for TCP Throughput Tests
   3. TCP Throughput Testing Methodology
      3.1 Determine Network Path MTU
      3.2 Baseline Round-trip Delay and Bandwidth
          3.2.1 Techniques to Measure Round Trip Time
          3.2.2 Techniques to Measure End-end Bandwidth
      3.3 Single TCP Connection Throughput Tests
          3.3.1 Interpretation of the Single Connection TCP
                Throughput Results
      3.4 Traffic Management Tests
          3.4.1 Multiple TCP Connections - below Link Capacity
          3.4.2 Multiple TCP Connections - over Link Capacity
          3.4.3 Interpretation of Multiple TCP Connection Results
   4. Acknowledgements
   5. References
   Authors' Addresses
1. Introduction

Even though RFC2544 was meant to benchmark network equipment and is
used by network equipment manufacturers (NEMs), network providers
have used it to benchmark operational networks in order to verify
SLAs (Service Level Agreements) before turning on a service to their
business customers. Testing an operational network prior to customer
activation is referred to as "turn-up" testing, and the SLA is
generally Layer 2/3 packet throughput, delay, loss and
This TCP methodology provides guidelines to measure the equilibrium
throughput, which refers to the maximum sustained rate obtained by
congestion avoidance before packet loss conditions occur (which would
cause the state change from congestion avoidance to a retransmission
phase). All maximum achievable throughputs specified in Section 3 are
with respect to this Equilibrium state.
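For reference, the equilibrium rate can be loosely related to path
characteristics using the macroscopic model of [MSMO], where the
achievable rate is proportional to MSS / (RTT * sqrt(p)) for a loss
probability p. The short sketch below (Python) is for illustration
only and is not part of the methodology; the constant 1.22 and the
sample inputs are assumptions taken from that model, not measured
values.

   # Rough equilibrium-throughput estimate from the macroscopic model
   # of [MSMO]: rate ~= (MSS/RTT) * C/sqrt(p), with C ~= 1.22 for
   # periodic loss. Real TCP stacks deviate from this simple model.
   import math

   def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate, c=1.22):
       """Approximate TCP equilibrium throughput in bits per second."""
       return (mss_bytes * 8.0 / rtt_s) * (c / math.sqrt(loss_rate))

   # Example (assumed values): MSS 1460 bytes, RTT 25 ms, 0.1% loss.
   print("%.1f Mbps" % (mathis_throughput_bps(1460, 0.025, 0.001) / 1e6))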
2.2 Metrics for TCP Throughput Tests
This draft focuses on a TCP throughput methodology and also
provides two basic metrics to compare results of various throughput
tests. It is recognized that the complexity and unpredictability of
TCP makes it impossible to develop a set of metrics that account for
the myriad of variables (i.e. RTT variation, loss conditions, TCP
implementation, etc.). However, these two basic metrics are useful to
compare network traffic management techniques, especially in section
3.4 of this document (Traffic Management Tests).
The TCP Efficiency metric is the percentage of bytes that were not
retransmitted and is defined as:
Transmitted Bytes - Retransmitted Bytes
--------------------------------------- x 100
Transmitted Bytes
This metric is easy to understand and provides a comparative measure
between various QoS mechanisms such as traffic management, congestion
avoidance, and also various TCP implementations (i.e. Reno, Vegas,
etc.).
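For illustration, the TCP Efficiency calculation could be scripted
directly from the transmit-side byte counters of the test tool, as in
the following sketch (Python). The counter names are hypothetical;
whatever counters the actual tool exposes would be used.

   def tcp_efficiency_pct(transmitted_bytes, retransmitted_bytes):
       """TCP Efficiency: percentage of transmitted bytes that were
       not retransmissions, per the definition above."""
       if transmitted_bytes == 0:
           return 0.0
       return (transmitted_bytes - retransmitted_bytes) * 100.0 / transmitted_bytes

   # Example (assumed counters): 100 MB sent in total, of which 2 MB
   # were retransmitted -> 98.00 % TCP Efficiency.
   print("%.2f %%" % tcp_efficiency_pct(100_000_000, 2_000_000))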
The second metric is also basic: the TCP Transfer Time, which is
simply the time it takes to transfer a block of data across
simultaneous TCP connections. The concept is useful to benchmark
traffic management tests, where multiple connections are required,
and it simplifies comparing results of different approaches. An
example would be the bulk transfer of 10 MB over 8 separate TCP
connections (each connection uploading 10 MB). Each connection may
achieve a different throughput during a test, and the overall
throughput rate is not always easy to determine (especially as the
number of connections increases). But by defining the Transfer Time
as the time for the successful transfer of 10 MB over all 8
connections, the single transfer time metric is a very useful means
to rate various traffic management techniques (i.e. FIFO, WFQ, WRED,
etc.).
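A minimal sketch of the TCP Transfer Time measurement for the
8-connection example is shown below (Python sockets). The server
address, port, and use of threads are illustrative assumptions; the
essential point is that the clock stops only when the slowest
connection has completed its block.

   # TCP Transfer Time for N concurrent connections: wall-clock time
   # until ALL connections have finished their block. The server
   # address/port and the 10 MB block size are assumptions.
   import socket, threading, time

   SERVER = ("198.51.100.10", 5001)     # hypothetical test server
   BLOCK = b"\0" * (10 * 1000 * 1000)   # 10 MB per connection
   CONNECTIONS = 8

   def upload():
       with socket.create_connection(SERVER) as s:
           s.sendall(BLOCK)             # push the whole block

   start = time.time()
   threads = [threading.Thread(target=upload) for _ in range(CONNECTIONS)]
   for t in threads:
       t.start()
   for t in threads:
       t.join()                         # wait for the slowest connection
   elapsed = time.time() - start
   print("TCP Transfer Time: %.2f s for %d x 10 MB" % (elapsed, CONNECTIONS))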
3. TCP Throughput Testing Methodology

This section summarizes the specific test methodology to achieve the
goals listed in Section 2.

As stated in Section 1, it is considered best practice to verify
the integrity of the network by conducting Layer 2/3 stress tests
such as RFC2544 or other methods of network stress tests. If the
network is not performing properly in terms of packet loss, jitter,
etc., then the TCP layer testing will not be meaningful since the
2. Baseline Round-trip Delay and Bandwidth. These measurements provide
estimates of the ideal TCP window size, which will be used in
subsequent test steps.

3. Single TCP Connection Throughput Tests. With baseline measurements
of round trip delay and bandwidth, a series of single connection TCP
throughput tests can be conducted to baseline the performance of the
network against expectations.
4. Traffic Management Tests. Various traffic management and queuing
techniques are tested in this step, using multiple TCP connections.
Multiple connection testing can verify that the network is configured
properly for traffic shaping versus policing, various queuing
implementations, and RED.
Important to note are some of the key characteristics and
considerations for the TCP test instrument. The test host may be a
standard computer or a dedicated communications test instrument, and
these TCP test hosts must be capable of emulating both a client and a
server. As a general rule of thumb, testing TCP throughput at rates
greater than 250-500 Mbit/sec generally requires high performance
server hardware or dedicated hardware based test tools.

Whether the TCP test host is a standard computer or dedicated test
implementation may be customized or tuned to run in higher
performance hardware.

- Most importantly, the TCP test host must be capable of generating
and receiving stateful TCP test traffic at the full link speed of the
network under test. This requirement is very serious and may require
custom test equipment, especially on 1 GigE and 10 GigE networks.
3.1. Determine Network Path MTU

TCP implementations should use Path MTU Discovery techniques (PMTUD).
PMTUD relies on ICMP 'need to frag' messages to learn the path MTU.
When a device has a packet to send which has the Don't Fragment (DF)
bit in the IP header set and the packet is larger than the Maximum
Transmission Unit (MTU) of the next hop link, the packet is dropped
and the device sends an ICMP 'need to frag' message back to the host
that originated the packet. The ICMP 'need to frag' message includes
the next hop MTU, which PMTUD uses to tune the TCP Maximum Segment
Size (MSS). Unfortunately, because many network managers completely
disable ICMP, this technique does not always prove reliable in real
world situations.

Packetization Layer Path MTU Discovery or PLPMTUD (RFC4821) should
be conducted to verify the minimum network path MTU. PLPMTUD can
be used with or without ICMP. The following sections provide a
summary of the PLPMTUD approach and an example using the TCP
protocol. RFC4821 specifies a search_high and search_low parameter
for the MTU. As specified in RFC4821, a value of 1024 is a generally
safe value to choose for search_low in modern networks.
It is important to determine the overhead of the links in the path,
and then to select a TCP MSS size corresponding to the Layer 3 MTU.
For example, if the MTU is 1024 bytes and the TCP/IP headers are 40
bytes, then the MSS would be set to 984 bytes.

An example scenario is a network where the actual path MTU is 1240
bytes. The TCP client probe MUST be capable of setting the MSS for
the probe packets and could start at MSS = 984 (which corresponds
to an MTU size of 1024 bytes).
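The MSS arithmetic used in these examples (MSS = MTU minus 40 bytes
of TCP/IP headers) and a simplified PLPMTUD-style search between
search_low and search_high can be sketched as follows (Python). The
probe function is a hypothetical stand-in for whatever transport
layer probing the test tool actually performs; here it simply
simulates a path whose true MTU is 1240 bytes.

   # MSS/MTU arithmetic from the examples above, plus a simplified
   # PLPMTUD-style search between search_low and search_high (RFC4821).
   # probe_succeeds() is a hypothetical stand-in for a real transport
   # probe; here it simulates a path whose true MTU is 1240 bytes.
   TCP_IP_HEADERS = 40                      # bytes, no TCP/IP options

   def mss_for_mtu(mtu):
       return mtu - TCP_IP_HEADERS          # e.g. MTU 1024 -> MSS 984

   def probe_succeeds(mtu, true_path_mtu=1240):
       return mtu <= true_path_mtu          # simulated probe result

   def plpmtud_search(search_low=1024, search_high=1500):
       """Search for the largest MTU-sized probe that gets through."""
       low, high = search_low, search_high
       while low < high:
           mid = (low + high + 1) // 2
           if probe_succeeds(mid):
               low = mid                    # a probe of size mid succeeded
           else:
               high = mid - 1               # mid is too large for the path
       return low

   mtu = plpmtud_search()
   print("path MTU ~= %d, MSS = %d" % (mtu, mss_for_mtu(mtu)))   # 1240, 1200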
provider world and one of the most common is the "iperf" tool. With
this tool, hosts are installed at each end of the network segment;
one as client and the other as server. The TCP Window size of both
the client and the server can be manually set and the achieved
throughput is measured, either uni-directionally or bi-directionally.
For higher BDP situations in lossy networks (long fat networks or
satellite links, etc.), TCP options such as Selective Acknowledgment
should be considered and also become part of the window
size / throughput characterization.
The following diagram shows the achievable TCP throughput on a T3 with
the default Windows2000/XP TCP Window size of 17520 Bytes.

                     TCP Throughput in Mbps

    45|
      |
    40|
      |
    35|
      |
    30|
      |
    25|
      |
    20|
      |
    15|      _______ 14.48M
      |     |       |
    10|     |       |      +-----+ 9.65M
      |     |       |      |     |      _______ 5.79M
     5|     |       |      |     |     |       |
      |_____+_______+______+_____+_____+_______+___________
               10            15           25
                    RTT in milliseconds
The following diagram shows the achievable TCP throughput on a 25ms T3
when the TCP Window size is increased and with the RFC1323 TCP Window
scaling option.

                     TCP Throughput in Mbps

    45|
      |                                                +-----+ 42.47M
    40|                                                |     |
      |                                                |     |
    35|                                                |     |
      |                                                |     |
    30|                                                |     |
      |                                                |     |
    25|                                                |     |
      |                                 ______ 21.23M  |     |
    20|                                |      |        |     |
      |                                |      |        |     |
    15|                                |      |        |     |
      |                                |      |        |     |
    10|                 +----+ 10.62M  |      |        |     |
      |   _____ 5.31M   |    |         |      |        |     |
     5|  |     |        |    |         |      |        |     |
      |__+_____+________+____+_________+______+________+_____+_____
            16            32               64            128
                    TCP Window size in Kilo Bytes
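The values in the two diagrams above follow from window-limited
throughput: a TCP connection can carry at most one window per round
trip, capped by the link rate, and the ideal window is the bandwidth
delay product (BDP). The sketch below (Python) reproduces that
arithmetic; the T3 rate of 44.2 Mbps is an assumption, and the
results are close to, but not exactly, the figures in the diagrams,
which depend on the overhead assumptions used when they were
produced.

   # Window-limited TCP throughput: at most one TCP window per RTT,
   # and never more than the bottleneck link rate.
   def window_limited_mbps(window_bytes, rtt_s, link_mbps):
       return min(window_bytes * 8 / rtt_s / 1e6, link_mbps)

   def ideal_window_bytes(link_mbps, rtt_s):
       """Bandwidth-Delay Product: the window needed to fill the link."""
       return int(link_mbps * 1e6 * rtt_s / 8)

   T3_MBPS = 44.2                       # assumed T3 payload rate
   for rtt_ms in (10, 15, 25):
       mbps = window_limited_mbps(17520, rtt_ms / 1000.0, T3_MBPS)
       print("17520 byte window, %2d ms RTT -> %5.2f Mbps" % (rtt_ms, mbps))
   print("BDP for a T3 at 25 ms RTT: %d bytes" % ideal_window_bytes(T3_MBPS, 0.025))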
The single connection TCP throughput test must be run over a long
duration, and results must be logged at the desired interval. The
test must record RTT and TCP retransmissions at each interval. This
correlation of retransmissions and RTT over the course of the test
will clearly identify which portions of the transfer reached TCP
Equilibrium state and to what extent increased RTT (congestive
effects) may have caused reduced equilibrium performance.
each TCP window size setting. For cases where the sustained TCP
throughput does not equal the predicted value, some possible causes
are listed:

- Network congestion causing packet loss

- Network congestion not causing packet loss, but effectively
  increasing the size of the required TCP window during the transfer

- Intermediate network devices which actively regenerate the TCP
  connection and can alter window size, MSS, etc.
3.4. Traffic Management Tests

After baselining the network under test with a single TCP connection
(Section 3.3), the nominal capacity of the network has been
determined. The capacity measured in Section 3.3 may be a capacity
range, and it is reasonable that some level of tuning may have been
required (i.e. router shaping techniques employed, intermediary proxy
like devices tuned, etc.).

Single connection TCP testing is a useful first step to measure
expected versus actual TCP performance and as a means to diagnose /
tune issues in the network and active elements. However, the ultimate
goal of this methodology is to more closely emulate customer traffic,
which comprises many TCP connections over a network link.
3.4.1 Multiple TCP Connections - below Link Capacity

First, the ability of the network to carry multiple TCP connections
to full network capacity should be tested. Prioritization and QoS
settings are not considered during this step, since the network
capacity is not to be exceeded by the test traffic (Section 3.4.2
covers the over capacity test case).

For this multiple connection TCP throughput test, the number of
connections will more than likely be limited by the test tool (host
vs. dedicated test equipment). As an example, for a GigE link with
This test step should be conducted over a reasonable test duration,
and results such as throughput per connection, RTT, and
retransmissions should be logged at each interval.

Since the network is not to be driven over capacity (the BDP being
allocated evenly across the connections), this test verifies the
ability of the network to carry multiple TCP connections up to the
link speed of the network.
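The number of connections needed to reach, but not exceed, link
capacity follows from the same BDP arithmetic used in Section 3.3. A
small sketch (Python) for a GigE link with 1 msec RTT and a 16 KB
window per connection (the example used in the surrounding text) is
shown below.

   # Number of concurrent connections of a given window size needed to
   # fill (but not exceed) a link: ceil(BDP / window). GigE example:
   # 1 msec RTT, 16 KB window per connection.
   import math

   def connections_to_fill(link_mbps, rtt_s, window_bytes):
       bdp_bytes = link_mbps * 1e6 * rtt_s / 8
       return math.ceil(bdp_bytes / window_bytes)

   print(connections_to_fill(1000, 0.001, 16 * 1024))    # -> 8 connections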
3.4.2 Multiple TCP Connections - over Link Capacity

In this step, the network bandwidth is intentionally exceeded with
multiple TCP connections to test expected prioritization and queuing
within the network.

All conditions related to the Section 3.3 set-up apply, especially
the ability of the test hosts to transfer stateful TCP traffic at
network line rates.

Using the same example from Section 3.3, a GigE link with 1 msec
RTT would require a window size of 128 KB to fill the link (with
one TCP connection). Assuming a 16KB window, 8 concurrent
connections would fill the GigE link capacity and values higher than
8 would over-subscribe the network capacity. The user would select
values to over-subscribe the network (i.e. possibly 10, 15, 20, etc.)
to conduct experiments to verify proper prioritization and queuing
within the network.
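When choosing oversubscription values such as 10, 15, or 20
connections, it can help to compute the offered load relative to the
link rate. A short sketch (Python) using the same GigE example
follows; the inputs are the assumed values from the text.

   # Offered load and oversubscription factor for the over-capacity
   # case, using the same GigE example (1 msec RTT, 16 KB windows).
   def oversubscription(link_mbps, rtt_s, window_bytes, num_connections):
       per_conn_mbps = window_bytes * 8 / rtt_s / 1e6    # window-limited rate
       offered_mbps = per_conn_mbps * num_connections
       return offered_mbps, offered_mbps / link_mbps

   for n in (10, 15, 20):                                # values suggested above
       offered, factor = oversubscription(1000, 0.001, 16 * 1024, n)
       print("%2d connections: %6.1f Mbps offered (%.2f x link rate)"
             % (n, offered, factor))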
3.4.3 Interpretation of Multiple TCP Connection Test Results

Without any prioritization in the network, the oversubscribed test
results could assist in queuing studies. With proper queuing, the
bandwidth should be shared in a reasonable manner. The author
understands that the term "reasonable" is too wide open, and future
versions of this memo will attempt to quantify this sharing in more
tangible terms. It is known that if a network element is not set for
proper queuing (i.e. FIFO), then an oversubscribed TCP connection
test will generally show a very uneven distribution of bandwidth.
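Although this memo does not yet quantify "reasonable" sharing, one
common way to summarize how evenly the per-connection throughput
results were distributed is Jain's fairness index. The sketch below
(Python) is shown purely as an illustration and is not part of this
methodology; the example throughput values are hypothetical.

   # Jain's fairness index over per-connection throughput results:
   # 1.0 means all connections received equal throughput, 1/n means
   # one connection received everything.
   def jain_fairness(throughputs):
       n = len(throughputs)
       return sum(throughputs) ** 2 / (n * sum(x * x for x in throughputs))

   print(jain_fairness([125, 125, 125, 125]))   # ~1.00, even sharing
   print(jain_fairness([400, 60, 25, 15]))      # ~0.38, very uneven (e.g. FIFO)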
With prioritization in the network, different TCP connections can be
assigned various QoS settings via the various mechanisms (i.e. per
VLAN, DSCP, etc.), and the higher priority connections must be
verified to achieve the expected throughput.
4. Acknowledgements

The author would like to thank Gilles Forget, Loki Jorgenson,
and Reinhard Schrage for technical review and contributions to this
draft-03 memo.

Also thanks to Matt Mathis and Matt Zekauskas for many good comments
through email exchange and for pointing us to great sources of
information pertaining to past works in the TCP capacity area.
5. References

   [RFC2581]  Allman, M., Paxson, V., Stevens, W., "TCP Congestion
              Control", RFC 2581, April 1999.

   [RFC3148]  Mathis, M., Allman, M., "A Framework for Defining
              Empirical Bulk Transfer Capacity Metrics", RFC 3148,
              July 2001.

   [RFC2544]  Bradner, S., McQuaid, J., "Benchmarking Methodology for
              Network Interconnect Devices", RFC 2544, March 1999.

   [RFC3449]  Balakrishnan, H., Padmanabhan, V. N., Fairhurst, G.,
              Sooriyabandara, M., "TCP Performance Implications of
              Network Path Asymmetry", RFC 3449, December 2002.

   [RFC5357]  Hedayat, K., Krzanowski, R., Morton, A., Yum, K.,
              Babiarz, J., "A Two-Way Active Measurement Protocol
              (TWAMP)", RFC 5357, October 2008.

   [RFC4821]  Mathis, M., Heffner, J., "Packetization Layer Path MTU
              Discovery", RFC 4821, March 2007.

   [draft-ietf-ippm-btc-cap-00]
              Allman, M., "A Bulk Transfer Capacity Methodology for
              Cooperating Hosts", draft-ietf-ippm-btc-cap-00.txt
              (work in progress), August 2001.

   [MSMO]     Mathis, M., Semke, J., Mahdavi, J., Ott, T., "The
              Macroscopic Behavior of the TCP Congestion Avoidance
              Algorithm", ACM SIGCOMM Computer Communication Review,
              Volume 27, Issue 3, July 1997.