Network Working Group                                          A. Morton
Internet-Draft                                           G. Ramachandran
Intended status: Informational                               G. Maguluri
Expires: September 6, 2010                                     AT&T Labs
                                                            March 5, 2010

             Reporting Metrics: Different Points of View
                 draft-ietf-ippm-reporting-metrics-01
Abstract
Consumers of IP network performance metrics have many different uses
in mind. This memo categorizes the different audience points of
view. It describes how the categories affect the selection of metric
parameters and options when seeking information that serves their
needs. The memo then proceeds to discuss "long-term" reporting
considerations (e.g., days, weeks, or months, as opposed to 10
seconds).
Requirements Language
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119 [RFC2119].
Status of this Memo

This Internet-Draft is submitted to IETF in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups. Note that
other groups may also distribute working documents as Internet-
Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt.

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html.

This Internet-Draft will expire on September 6, 2010.
Copyright Notice

Copyright (c) 2010 IETF Trust and the persons identified as the
document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the BSD License.

This document may contain material from IETF Documents or IETF
Contributions published or made publicly available before November
10, 2008. The person(s) controlling the copyright in some of this
material may not have granted the IETF Trust the right to allow
modifications of such material outside the IETF Standards Process.
Without obtaining an adequate license from the person(s) controlling
the copyright in such materials, this document may not be modified
outside the IETF Standards Process, and derivative works of it may
not be created outside the IETF Standards Process, except to format
it for publication as an RFC or to translate it into languages other
than English.
Table of Contents

1. Introduction
2. Purpose and Scope
3. Effect of POV on the Loss Metric
   3.1. Loss Threshold
      3.1.1. Network Characterization
      3.1.2. Application Performance
   3.2. Errored Packet Designation
   3.3. Causes of Lost Packets
   3.4. Summary for Loss
4. Effect of POV on the Delay Metric
   4.1. Treatment of Lost Packets
      4.1.1. Application Performance
      4.1.2. Network Characterization
      4.1.3. Delay Variation
      4.1.4. Reordering
   4.2. Preferred Statistics
   4.3. Summary for Delay
5. Effect of POV on Raw Capacity Metrics
   5.1. Type-P Parameter
   5.2. a priori Factors
   5.3. IP-layer Capacity
   5.4. IP-layer Utilization
   5.5. IP-layer Available Capacity
   5.6. Variability in Utilization and Avail. Capacity
6. Test Streams and Sample Size
   6.1. Test Stream Characteristics
   6.2. Sample Size
7. Reporting Results
   7.1. Overview of Metric Statistics
   7.2. Long-Term Reporting Considerations
8. IANA Considerations
9. Security Considerations
10. Acknowledgements
11. References
   11.1. Normative References
   11.2. Informative References
Authors' Addresses
1. Introduction

When designing measurements of IP networks and presenting the
results, knowledge of the audience is a key consideration. To
present a useful and relevant portrait of network conditions, one
must answer the following question:

"How will the results be used?"
The IPPM framework [RFC2330] and other RFCs describing IPPM metrics
provide a background for this memo.

2. Purpose and Scope

The purpose of this memo is to clearly delineate two points-of-view
(POV) for using measurements, and describe their effects on the test
design, including the selection of metric parameters and reporting
the results.
The current scope of this memo primarily covers the design and
reporting of the loss and delay metrics [RFC2680] [RFC2679]. It will
also discuss the delay variation and reordering metrics where
applicable.
With capacity metrics growing in relevance to the industry, the memo
also covers POV and reporting considerations for metrics resulting
from the Bulk Transfer Capacity Framework [RFC3148] and Network
Capacity Definitions [RFC5136]. These memos effectively describe two
different categories of metrics, [RFC3148] with congestion flow-
control and the notion of unique data bits delivered, and [RFC5136]
using a definition of raw capacity without the restrictions of data
uniqueness or congestion-awareness. It might seem at first glance
that each of these metrics has an obvious audience (Raw = Network
Characterization, Restricted = Application Performance), but reality
is more complex and consistent with the overall topic of capacity
measurement and reporting. The Raw and Restricted capacity metrics
will be treated in separate sections, although they share one common
reporting issue: representing variability in capacity metric results.
Sampling, or the design of the active packet stream that is the basis
for the measurements, is also discussed.
3. Effect of POV on the Loss Metric

This section describes the ways in which the Loss metric can be tuned
to reflect the preferences of the two audience categories, or
different POV. The waiting time to declare a packet lost, or loss
threshold, is one area where there would appear to be a difference,
but the ability to post-process the results may resolve it.

3.1. Loss Threshold
2. straightforward network characterization without double-counting
defects, and

3. consistency with Delay variation and Reordering metric
definitions,

the most efficient practice is to distinguish between truly lost and
delayed packets with a sufficiently long waiting time, and to
designate the delay of non-arriving packets as undefined.
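As a plain illustration of this practice (not part of any metric
definition), the Python sketch below applies an assumed waiting time
Tmax to a set of hypothetical singleton results: packets arriving
within Tmax contribute a finite delay, while packets that never
arrive, or arrive after Tmax, are counted as lost and their delay is
left undefined.

   # Sketch: separate truly lost packets from delayed packets using a
   # sufficiently long waiting time.  Tmax and the data are assumed values.
   Tmax = 3.0  # waiting time in seconds before declaring loss (illustrative)

   # (send_time, arrival_time) pairs; None means the packet never arrived
   singletons = [(0.0, 0.021), (1.0, None), (2.0, 2.250), (3.0, 7.5)]

   delays, lost = [], 0
   for sent, arrived in singletons:
       if arrived is None or (arrived - sent) > Tmax:
           lost += 1                    # delay stays undefined for this packet
       else:
           delays.append(arrived - sent)

   loss_ratio = lost / len(singletons)
   print("loss ratio:", loss_ratio, "defined delays:", delays)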
5. Effect of POV on Raw Capacity Metrics
This section describes the ways that raw capacity metrics can be
tuned to reflect the preferences of the two audiences, or different
Points-of-View (POV). Raw capacity refers to the metrics defined in
[RFC5136] which do not include restrictions such as data uniqueness
or flow-control response to congestion.
In summary, the metrics considered are IP-layer Capacity, Utilization
(or used capacity), and Available Capacity, for individual links and
complete paths. These three metrics form a triad: knowing one metric
constrains the other two (within their allowed range), and knowing
two determines the third. The link metrics have another key aspect
in common: they are single-measurement-point metrics at the egress of
a link. The path Capacity and Available Capacity are derived by
examining the set of single-point link measurements and taking the
minimum value.
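To make the triad and the path derivation concrete, the following
sketch uses assumed per-link values in bits per second; it simply
shows the arithmetic relating the three metrics and the minimum taken
over the links of a path, not a measurement method.

   # Sketch of the Capacity/Utilization/Available Capacity triad and the
   # path minimum.  Link values are illustrative, in bits per second.
   links = [
       {"capacity": 1.0e9, "utilization": 0.40e9},
       {"capacity": 0.6e9, "utilization": 0.15e9},
       {"capacity": 2.5e9, "utilization": 1.00e9},
   ]

   # Knowing two members of the triad determines the third.
   for link in links:
       link["available"] = link["capacity"] - link["utilization"]

   # Path values are the minima over the single-point link measurements.
   path_capacity = min(link["capacity"] for link in links)
   path_available = min(link["available"] for link in links)
   print(path_capacity, path_available)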
5.1. Type-P Parameter
The concept of "packets of type-P" is defined in [RFC2330]. The
type-P categorization has critical relevance in all forms of capacity
measurement and reporting. The ability to categorize packets based
on header fields for assignment to different queues and scheduling
mechanisms is now commonplace. When unused resources are shared
across queues, the conditions in all packet categories will affect
capacity and related measurements. This is one source of variability
in the results that all audiences would prefer to see reported in a
useful and easily understood way.
Type-P in OWAMP and TWAMP is essentially confined to the Diffserv
Codepoint [ref]. DSCP is the most common qualifier for type-P.
Each audience will have a set of type-P qualifications and value
combinations that are of interest. Measurements and reports SHOULD
have the flexibility to report per-type and aggregate performance.
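One way to provide that flexibility, sketched below with invented
sample data, is to keep per-packet results keyed by DSCP so that both
per-type and aggregate summaries can be produced from the same
measurement.

   # Sketch: per-type (per-DSCP) and aggregate reporting from one data set.
   # DSCP names and delay values are illustrative only.
   from collections import defaultdict
   from statistics import mean

   # (dscp, one_way_delay_in_seconds) for each received test packet
   results = [("EF", 0.020), ("EF", 0.022), ("AF41", 0.035),
              ("BE", 0.050), ("BE", 0.048), ("AF41", 0.031)]

   per_type = defaultdict(list)
   for dscp, delay in results:
       per_type[dscp].append(delay)

   for dscp, delays in per_type.items():
       print(dscp, "mean delay:", mean(delays))          # per-type view
   print("aggregate mean delay:", mean(d for _, d in results))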
5.2. a priori Factors
The audience for Network Characterization may have detailed
information about each link that comprises a complete path (due to
ownership, for example), or some of the links in the path but not
others, or none of the links.
There are cases where the measurement audience only has information
on one of the links (the local access link), and wishes to measure
one or more of the raw capacity metrics. This scenario is quite
common, and has spawned a substantial number of experimental
measurement methods [ref to CAIDA survey page, etc.]. Many of these
methods recognize that their users want a result fairly quickly and
in a single trial. Thus, the measurement interval is kept short (a
few seconds to a minute).
5.3. IP-layer Capacity
For links, this metric's theoretical maximum value can be determined
from the physical layer bit rate and the bit rate reduction due to
the layers between the physical layer and IP. When measured, this
metric takes additional factors into account, such as the ability of
the sending device to process and forward traffic under various
conditions. For example, the arrival of routing updates may spawn
high priority processes that reduce the sending rate temporarily.
Thus, the measured capacity of a link will be variable, and the
maximum capacity observed applies to a specific time, time interval,
and other relevant circumstances.
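For example, the bit-rate reduction between the physical layer and IP
can be computed from the framing overhead. The sketch below assumes
untagged Gigabit Ethernet and maximum-size (1500-octet) IP packets,
and yields the theoretical maximum against which measured values can
be compared.

   # Sketch: theoretical IP-layer Capacity from the physical bit rate and
   # the per-packet overhead below IP.  Gigabit Ethernet figures assumed.
   phy_rate_bps = 1.0e9                     # physical-layer bit rate
   ip_packet_octets = 1500                  # IP header plus payload
   eth_overhead_octets = 14 + 4 + 8 + 12    # header + FCS + preamble/SFD + gap

   ip_capacity_bps = phy_rate_bps * ip_packet_octets / (
       ip_packet_octets + eth_overhead_octets)
   print(round(ip_capacity_bps / 1e6, 1), "Mbit/s")   # about 975.3 Mbit/s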
For paths composed of a series of links, it is easy to see how the
sources of variability for the results grow with each link in the
path. Results variability will be discussed in more detail below.
5.4. IP-layer Utilization
The ideal metric definition of Link Utilization [RFC5136] is based on
the actual usage (bits successfully received during a time interval)
and the Maximum Capacity for the same interval.
In practice, Link Utilization can be calculated by counting the IP-
layer (or other layer) octets received over a time interval and
dividing by the theoretical maximum of octets that could have been
delivered in the same interval. A commonly used time interval is 5
minutes, and this interval has been sufficient to support network
operations and design for some time. Five minutes is somewhat long
compared with the expected download time for web pages, but short
with respect to large file transfers and TV program viewing. It is
fair to say that considerable variability is concealed by reporting a
single (average) Utilization value for each 5 minute interval. Some
performance management systems have begun to make 1 minute averages
available.
There is also a limit on the smallest useful measurement interval.
Intervals on the order of the serialization time for a single Maximum
Transmission Unit (MTU) packet will observe on/off behavior and
report 100% or 0%. The smallest interval needs to be some multiple
of MTU serialization time for averaging to be effective.
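As an illustration of the calculation described above (with assumed
counter values), the sketch below derives a Utilization figure for a
5-minute interval and also computes the MTU serialization time that
bounds the smallest useful measurement interval.

   # Sketch: IP-layer Utilization from an octet counter over an interval.
   # The capacity, interval, and counter values are illustrative.
   link_capacity_bps = 100.0e6      # assumed IP-layer capacity of the link
   interval_s = 300                 # 5-minute measurement interval
   octets_received = 1.2e9          # IP-layer octets counted in the interval

   utilization = (octets_received * 8) / (link_capacity_bps * interval_s)
   print("utilization:", round(utilization, 3))   # fraction of capacity, 0.32

   # The averaging interval should span many MTU serialization times.
   mtu_octets = 1500
   mtu_serialization_s = mtu_octets * 8 / link_capacity_bps
   print("MTU serialization time:", mtu_serialization_s, "s")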
5.5. IP-layer Available Capacity
The Available Capacity of a link can be calculated using the Capacity
and Utilization metrics.
When Available capacity of a link or path is estimated through some
measurement technique, the following parameters SHOULD be reported:
o Name and reference to the exact method of measurement
o IP packet length, octets (including IP header)
o Maximum Capacity that can be assessed in the measurement
configuration
o The time and duration of the measurement
o All other parameters specific to the measurement method
Many methods of Available capacity measurement have a maximum
capacity that they can measure, and this maximum may be less than the
actual Available capacity of the link or path. Therefore, it is
important to know the capacity value beyond which there will be no
measured improvement.
The Application Design audience may have a target capacity value and
simply wish to assess whether there is sufficient Available Capacity.
This case simplifies measurement of link and path capacity to some
degree, as long as the measurable maximum exceeds the target
capacity.
5.6. Variability in Utilization and Avail. Capacity
As with most metrics and measurements, assessing the consistency or
variability in the results gives the user an intuitive feel for the
degree (or confidence) that any one value is representative of other
results, or of the underlying distribution from which these singleton
measurements have come.
Two questions are raised here for further discussion:
What ways can Utilization be measured and summarized to describe the
potential variability in a useful way?
How can the variability in Available Capacity estimates be reported,
so that the confidence in the results is also conveyed?
6. Test Streams and Sample Size
This section discusses two key aspects of measurement that are
sometimes omitted from the report: the description of the test stream
on which the measurements are based, and the sample size.
6.1. Test Stream Characteristics
Network Characterization has traditionally used Poisson-distributed
inter-packet spacing, as this provides an unbiased sample. The
average inter-packet spacing may be selected to allow observation of
specific network phenomena. Other test streams are designed to
sample some property of the network, such as the presence of
congestion, link bandwidth, or packet reordering.

If measuring a network in order to make inferences about applications
or receiver performance, then there are usually efficiencies derived
from a test stream that has similar characteristics to the sender.
In some cases, it is essential to synthesize the sender stream, as
with Bulk Transfer Capacity estimates. In other cases, it may be
sufficient to sample with a "known bias", e.g., a Periodic stream to
estimate real-time application performance.
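For concreteness, the sketch below generates send schedules for the
two stream types mentioned here: a Poisson stream with exponentially
distributed inter-packet gaps, and a Periodic stream. The mean
spacing and session length are arbitrary illustrative values.

   # Sketch: send schedules for a Poisson stream and a Periodic stream.
   # Mean spacing and session duration are illustrative values only.
   import random

   mean_gap_s = 0.1     # average inter-packet spacing
   duration_s = 5.0     # length of the test session

   def poisson_schedule(mean_gap, duration, seed=1):
       rng = random.Random(seed)
       t, times = 0.0, []
       while True:
           t += rng.expovariate(1.0 / mean_gap)  # exponential inter-packet gap
           if t >= duration:
               return times
           times.append(t)

   def periodic_schedule(gap, duration):
       n = int(round(duration / gap))            # packets in the session
       return [i * gap for i in range(1, n + 1)]

   print(len(poisson_schedule(mean_gap_s, duration_s)),
         len(periodic_schedule(mean_gap_s, duration_s)))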
6.2. Sample Size
Sample size is directly related to the accuracy of the results, and
plays a critical role in the report. Even if only the sample size
(in terms of number of packets) is given for each value or summary
statistic, it imparts a notion of the confidence in the result.

In practice, the sample size will be selected taking both statistical
and practical factors into account. Among these factors are:

1. The estimated variability of the quantity being measured
4. etc.

A sample size may sometimes be referred to as "large". This is a
relative and qualitative term. It is preferable to describe what
one is attempting to achieve with the sample. For example, stating
an implication may be helpful: this sample is large enough such that
a single outlying value at ten times the "typical" sample mean (the
mean without the outlying value) would influence the mean by no more
than X.
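As a worked version of that statement (with assumed numbers): if n
results have mean m and one additional value of 10*m is observed, the
new mean is (n*m + 10*m)/(n+1), so the outlier shifts the mean by
9*m/(n+1). The sketch below evaluates this influence for a
hypothetical sample.

   # Sketch: influence of one outlier at ten times the "typical" sample mean.
   # n and m are assumed; the influence shrinks as the sample size grows.
   n = 900          # packets in the sample (illustrative)
   m = 0.050        # "typical" sample mean, e.g. 50 ms of delay (illustrative)

   outlier = 10 * m
   new_mean = (n * m + outlier) / (n + 1)
   influence = new_mean - m               # equals 9*m/(n+1)
   print("influence on the mean:", influence, "s")   # about 0.0005 s here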
7. Reporting Results
This section gives an overview of recommendations, followed by
additional considerations for reporting results in the "long-term".
7.1. Overview of Metric Statistics
This section gives an overview of reporting recommendations for the
loss, delay, and delay variation metrics based on the discussion and
conclusions of the preceding sections.

The minimal report on measurements MUST include both Loss and Delay
Metrics.

For Packet Loss, the loss ratio defined in [RFC2680] is a sufficient
starting point, especially the guidance for setting the loss
distributions are not truncated.

For Packet Delay Variation (PDV), the minimum delay of the
conditional distribution should be used as the reference delay for
computing PDV according to [Y.1540] or [RFC3393]. A useful value to
report is a pseudo range of delay variation based on calculating the
difference between a high percentile of delay and the minimum delay.
For example, the 99.9%-ile minus the minimum will give a value that
can be compared with objectives in [Y.1541].
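To make this summary concrete, the sketch below computes such a
pseudo range from a small set of illustrative one-way delays: the
minimum of the conditional (arrived-packets-only) distribution serves
as the reference, and a high percentile minus that minimum is the
reported value. The nearest-rank percentile used here is only for
illustration.

   # Sketch: PDV pseudo range = high percentile of delay minus minimum delay.
   # Delay values are illustrative; only packets that arrived are included.
   def percentile(sorted_values, p):
       # simple nearest-rank percentile, sufficient for this illustration
       k = int(round(p / 100.0 * len(sorted_values))) - 1
       return sorted_values[max(0, min(len(sorted_values) - 1, k))]

   delays = sorted([0.020, 0.021, 0.023, 0.022, 0.045, 0.020, 0.025, 0.030])

   reference = delays[0]                 # minimum of conditional distribution
   pdv_pseudo_range = percentile(delays, 99.9) - reference
   print("PDV pseudo range:", pdv_pseudo_range, "s")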
7.2. Long-Term Reporting Considerations
[I-D.ietf-ippm-reporting] describes methods to conduct measurements
and report the results on a near-immediate time scale (10 seconds,
which we consider to be "short-term").

Measurement intervals and reporting intervals need not be the same
length. Sometimes, the user is only concerned with the performance
levels achieved over a relatively long interval of time (e.g., days,
weeks, or months, as opposed to 10 seconds). However, there can be
risks involved with running a measurement continuously over a long
and the results of each measurement interval are compared with the
objective. Every measurement interval where the results meet the
objective contributes to the fraction of time with performance as
specified. When the reporting interval contains many measurement
intervals, it is possible to present the results as "metric A was less
than or equal to objective X during Y% of time."
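A minimal sketch of this long-term summary follows, using invented
per-interval results and an assumed objective (numerical objectives
themselves are outside IETF scope, as noted below): each measurement
interval's result is compared with the objective, and the report is
the percentage of intervals that met it.

   # Sketch: "metric A was <= objective X during Y% of time".
   # The per-interval results and the objective are illustrative values.
   objective_x = 0.050                  # e.g., a delay objective in seconds
   interval_results = [0.041, 0.038, 0.072, 0.045, 0.049, 0.050, 0.061, 0.040]

   met = sum(1 for r in interval_results if r <= objective_x)
   percent_of_time = 100.0 * met / len(interval_results)
   print(f"metric A <= {objective_x} during {percent_of_time:.1f}% of time")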
NOTE that numerical thresholds are not set in IETF performance work NOTE that numerical thresholds are not set in IETF performance work
and are explicitly excluded from the IPPM charter. and are explicitly excluded from the IPPM charter.
8. IANA Considerations
This document makes no request of IANA.

Note to RFC Editor: this section may be removed on publication as an
RFC.
9. Security Considerations
The security considerations that apply to any active measurement of
live networks are relevant here as well. See [RFC4656].
10. Acknowledgements
The authors would like to thank Phil Chimento for his suggestion to
employ conditional distributions for Delay, and Steve Konish Jr. for
his careful review and suggestions.
11. References
11.1. Normative References
[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
Requirement Levels", BCP 14, RFC 2119, March 1997.

[RFC2330] Paxson, V., Almes, G., Mahdavi, J., and M. Mathis,
"Framework for IP Performance Metrics", RFC 2330, May 1998.

[RFC2679] Almes, G., Kalidindi, S., and M. Zekauskas, "A One-way
Delay Metric for IPPM", RFC 2679, September 1999.

[RFC2680] Almes, G., Kalidindi, S., and M. Zekauskas, "A One-way
Packet Loss Metric for IPPM", RFC 2680, September 1999.
[RFC3148] Mathis, M. and M. Allman, "A Framework for Defining
Empirical Bulk Transfer Capacity Metrics", RFC 3148,
July 2001.
[RFC3393] Demichelis, C. and P. Chimento, "IP Packet Delay Variation
Metric for IP Performance Metrics (IPPM)", RFC 3393, November 2002.

[RFC4656] Shalunov, S., Teitelbaum, B., Karp, A., Boote, J., and M.
Zekauskas, "A One-way Active Measurement Protocol (OWAMP)",
RFC 4656, September 2006.

[RFC4737] Morton, A., Ciavattone, L., Ramachandran, G., Shalunov,
S., and J. Perser, "Packet Reordering Metrics", RFC 4737,
November 2006.
[RFC5136] Chimento, P. and J. Ishac, "Defining Network Capacity",
RFC 5136, February 2008.
11.2. Informative References
[Casner] "A Fine-Grained View of High Performance Networking, NANOG [Casner] "A Fine-Grained View of High Performance Networking, NANOG
22 Conf.; http://www.nanog.org/mtg-0105/agenda.html", May 22 Conf.; http://www.nanog.org/mtg-0105/agenda.html", May
20-22 2001. 20-22 2001.
[Cia03] "Standardized Active Measurements on a Tier 1 IP Backbone, [Cia03] "Standardized Active Measurements on a Tier 1 IP Backbone,
IEEE Communications Mag., pp 90-97.", June 2003. IEEE Communications Mag., pp 90-97.", June 2003.
[I-D.ietf-ippm-framework-compagg] [I-D.ietf-ippm-framework-compagg]
Morton, A., "Framework for Metric Composition", Morton, A., "Framework for Metric Composition",
draft-ietf-ippm-framework-compagg-08 (work in progress), draft-ietf-ippm-framework-compagg-09 (work in progress),
June 2009. December 2009.
[I-D.ietf-ippm-reporting] [I-D.ietf-ippm-reporting]
Shalunov, S. and M. Swany, "Reporting IP Performance Shalunov, S. and M. Swany, "Reporting IP Performance
Metrics to Users", draft-ietf-ippm-reporting-04 (work in Metrics to Users", draft-ietf-ippm-reporting-04 (work in
progress), July 2009. progress), July 2009.
[Y.1540] ITU-T Recommendation Y.1540, "Internet protocol data [Y.1540] ITU-T Recommendation Y.1540, "Internet protocol data
communication service - IP packet transfer and communication service - IP packet transfer and
availability performance parameters", December 2002. availability performance parameters", December 2002.