Network Working Group                                             X. Zhu
Internet-Draft                                                     R. Pan
Intended status: Experimental                                  M. Ramalho
Expires: September 19, 2016                                       S. Mena
                                                                  P. Jones
                                                                     J. Fu
                                                             Cisco Systems
                                                                S. D'Aronco
                                                                       EPFL
                                                                C. Ganzhorn
                                                             March 18, 2016

     NADA: A Unified Congestion Control Scheme for Real-Time Media
                        draft-ietf-rmcat-nada-02
Abstract

This document describes NADA (network-assisted dynamic adaptation), a
novel congestion control scheme for interactive real-time media
applications, such as video conferencing.  In the proposed scheme, the
sender regulates its sending rate based on either implicit or explicit
congestion signaling, in a unified approach.  The scheme can benefit
from explicit congestion notification (ECN) markings from network
nodes.  It also maintains consistent sender behavior in the
skipping to change at page 1, line 45
Internet-Drafts are working documents of the Internet Engineering Task
Force (IETF).  Note that other groups may also distribute working
documents as Internet-Drafts.  The list of current Internet-Drafts is
at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

This Internet-Draft will expire on September 19, 2016.
Copyright Notice

Copyright (c) 2016 IETF Trust and the persons identified as the
document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document.  Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of the
Trust Legal Provisions and are provided without warranty as described
in the Simplified BSD License.
Table of Contents

   1.  Introduction . . . . . . . . . . . . . . . . . . . . . . . .   3
   2.  Terminology  . . . . . . . . . . . . . . . . . . . . . . . .   3
   3.  System Overview  . . . . . . . . . . . . . . . . . . . . . .   3
   4.  Core Congestion Control Algorithm  . . . . . . . . . . . . .   5
     4.1.  Mathematical Notations . . . . . . . . . . . . . . . . .   5
     4.2.  Receiver-Side Algorithm  . . . . . . . . . . . . . . . .   8
     4.3.  Sender-Side Algorithm  . . . . . . . . . . . . . . . . .  10
   5.  Practical Implementation of NADA . . . . . . . . . . . . . .  12
     5.1.  Receiver-Side Operation  . . . . . . . . . . . . . . . .  12
       5.1.1.  Estimation of one-way delay and queuing delay  . . .  12
       5.1.2.  Estimation of packet loss/marking ratio  . . . . . .  12
       5.1.3.  Estimation of receiving rate . . . . . . . . . . . .  13
     5.2.  Sender-Side Operation  . . . . . . . . . . . . . . . . .  13
       5.2.1.  Rate shaping buffer  . . . . . . . . . . . . . . . .  14
       5.2.2.  Adjusting video target rate and sending rate . . . .  15
     5.3.  Feedback Message Requirements  . . . . . . . . . . . . .  15
   6.  Discussions and Further Investigations . . . . . . . . . . .  16
     6.1.  Choice of delay metrics  . . . . . . . . . . . . . . . .  16
     6.2.  Method for delay, loss, and marking ratio estimation . .  16
     6.3.  Impact of parameter values . . . . . . . . . . . . . . .  17
     6.4.  Sender-based vs. receiver-based calculation  . . . . . .  18
     6.5.  Incremental deployment . . . . . . . . . . . . . . . . .  18
   7.  Implementation Status  . . . . . . . . . . . . . . . . . . .  18
   8.  IANA Considerations  . . . . . . . . . . . . . . . . . . . .  19
   9.  Acknowledgements . . . . . . . . . . . . . . . . . . . . . .  19
   10. References . . . . . . . . . . . . . . . . . . . . . . . . .  19
     10.1.  Normative References  . . . . . . . . . . . . . . . . .  19
     10.2.  Informative References  . . . . . . . . . . . . . . . .  20
   Appendix A.  Network Node Operations . . . . . . . . . . . . . .  21
     A.1.  Default behavior of drop tail queues . . . . . . . . . .  22
     A.2.  RED-based ECN marking  . . . . . . . . . . . . . . . . .  22
     A.3.  Random Early Marking with Virtual Queues . . . . . . . .  22
   Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . .  23
1.  Introduction

Interactive real-time media applications introduce a unique set of
challenges for congestion control.  Unlike TCP, the mechanism used for
real-time media needs to adapt quickly to instantaneous bandwidth
changes, accommodate fluctuations in the output of video encoder rate
control, and cause low queuing delay over the network.  An ideal
scheme should also make effective use of all types of congestion
signals, including packet loss, queuing delay, and explicit congestion
notification (ECN) [RFC3168] markings.  The requirements for the
congestion control algorithm are outlined in
[I-D.ietf-rmcat-cc-requirements].
This document describes an experimental congestion control scheme
called network-assisted dynamic adaptation (NADA).  The NADA design
benefits from explicit congestion control signals (e.g., ECN markings)
from the network, yet also operates when only implicit congestion
indicators (delay and/or loss) are available.  Such a unified sender
behavior distinguishes NADA from other congestion control schemes for
real-time media.  In addition, its core congestion control algorithm
is designed to guarantee stability for path round-trip-times (RTTs)
below a prescribed bound (e.g., 250ms with default parameter choices).
It further supports weighted bandwidth sharing among competing video
flows with different priorities.  The signaling mechanism consists of
the standard RTP timestamp [RFC3550] and RTCP feedback reports
carrying non-standard messages.
2.  Terminology

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in [RFC2119].
3.  System Overview

Figure 1 shows the end-to-end system for real-time media transport
that NADA operates in.  Note that there also exist network nodes along
the reverse (potentially uncongested) path that the RTCP feedback
reports traverse.  Those network nodes are not shown in the figure for
the sake of brevity.
   +---------+  r_vin  +--------+        +--------+     +----------+
   |  Media  |<--------|  RTP   |        |Network |     |   RTP    |
   | Encoder |========>| Sender |=======>|  Node  |====>| Receiver |
   +---------+  r_vout +--------+ r_send +--------+     +----------+
                           /|\                               |
                            |                                |
                            +--------------------------------+
                                  RTCP Feedback Report

                        Figure 1: System Overview
o Media encoder with rate control capabilities.  It encodes raw media
  (audio and video) frames into a compressed bitstream which is later
  packetized into RTP packets.  As discussed in
  [I-D.ietf-rmcat-video-traffic-model], the actual output rate from
  the encoder r_vout may fluctuate around the target r_vin.
  Furthermore, it is possible that the encoder can only react to bit
  rate changes at rather coarse time intervals, e.g., once every 0.5
  seconds.
o RTP sender: responsible for calculating the NADA reference rate
  based on network congestion indicators (delay, loss, or ECN marking
  reports from the receiver), for updating the video encoder with a
  new target rate r_vin, and for regulating the actual sending rate
  r_send accordingly.  The RTP sender also generates a sending
  timestamp for each outgoing packet.

o RTP receiver: responsible for measuring and estimating end-to-end
  delay (based on sender timestamp), packet loss (based on RTP
  sequence number), ECN marking ratios (based on [RFC6679]), and
  receiving rate (r_recv) of the flow.  It calculates the aggregated
  congestion signal (x_curr) that accounts for queuing delay, ECN
  markings, and packet losses.  The receiver also determines the mode
  for sender rate adaptation (rmode) based on whether the flow has
  encountered any standing non-zero congestion.  The receiver sends
  periodic RTCP reports back to the sender, containing values of
  x_curr, rmode, and r_recv.
o Network node with several modes of operation.  The system can work
  with the default behavior of a simple drop tail queue.  It can also
  benefit from advanced AQM features such as PIE, FQ-CoDel, RED-based
  ECN marking, and PCN marking using a token bucket algorithm.  Note
  that network node operation is outside the control of the NADA
  design.
4.  Core Congestion Control Algorithm

Like TCP-Friendly Rate Control (TFRC) [Floyd-CCR00] [RFC5348], NADA is
a rate-based congestion control algorithm.  In its simplest form, the
sender reacts to the collection of network congestion indicators in
the form of an aggregated congestion signal, and operates in one of
two modes:
o Accelerated ramp-up: when the bottleneck is deemed to be
  underutilized, the rate increases multiplicatively with respect to
  the rate of previously successful transmissions.  The rate increase
  multiplier (gamma) is calculated based on the observed round-trip-
  time and target feedback interval, so as to limit self-inflicted
  queuing delay.
o Gradual rate update: in the presence of non-zero aggregate
  congestion signal, the sending rate is adjusted in reaction to both
  its value (x_curr) and its change in value (x_diff).

This section introduces the list of mathematical notations and
describes the core congestion control algorithm at the sender and
receiver, respectively.  Additional details on recommended practical
implementations are described in Section 5.1 and Section 5.2.

4.1.  Mathematical Notations

This section summarizes the list of variables and parameters used in
the NADA algorithm.
   +--------------+-------------------------------------------------+
   | Notation     | Variable Name                                   |
   +--------------+-------------------------------------------------+
   | t_curr       | Current timestamp                               |
   | t_last       | Last time sending/receiving a feedback          |
   | delta        | Observed interval between current and previous  |
   |              | feedback reports: delta = t_curr-t_last         |
   | r_ref        | Reference rate based on network congestion      |
   | r_send       | Sending rate                                    |
   | r_recv       | Receiving rate                                  |
   | r_vin        | Target rate for video encoder                   |
   | r_vout       | Output rate from video encoder                  |
   | d_base       | Estimated baseline delay                        |
   | d_fwd        | Measured and filtered one-way delay             |
   | d_queue      | Estimated queueing delay                        |
   | d_tilde      | Equivalent delay after non-linear warping       |
   | p_mark       | Estimated packet ECN marking ratio              |
   | p_loss       | Estimated packet loss ratio                     |
   | x_curr       | Aggregate congestion signal                     |
   | x_prev       | Previous value of aggregate congestion signal   |
   | x_diff       | Change in aggregate congestion signal w.r.t.    |
   |              | its previous value: x_diff = x_curr - x_prev    |
   | rmode        | Rate update mode: (0 = accelerated ramp-up;     |
   |              | 1 = gradual update)                             |
   | gamma        | Rate increase multiplier in accelerated ramp-up |
   |              | mode                                            |
   | rtt          | Estimated round-trip-time at sender             |
   | buffer_len   | Rate shaping buffer occupancy measured in bytes |
   +--------------+-------------------------------------------------+

                      Figure 2: List of variables.
   +--------------+----------------------------------+----------------+
   | Notation     | Parameter Name                   | Default Value  |
   +--------------+----------------------------------+----------------+
   | PRIO         | Weight of priority of the flow   | 1.0            |
   | RMIN         | Minimum rate of application      | 150 Kbps       |
   |              | supported by media encoder       |                |
   | RMAX         | Maximum rate of application      | 1.5 Mbps       |
   |              | supported by media encoder       |                |
   | XREF         | Reference congestion level       | 20ms           |
   | KAPPA        | Scaling parameter for gradual    | 0.5            |
   |              | rate update calculation          |                |
   | ETA          | Scaling parameter for gradual    | 2.0            |
   |              | rate update calculation          |                |
   | TAU          | Upper bound of RTT in gradual    | 500ms          |
   |              | rate update calculation          |                |
   | DELTA        | Target feedback interval         | 100ms          |
   | DFILT        | Bound on filtering delay         | 120ms          |
   | LOGWIN       | Observation window in time for   | 500ms          |
   |              | calculating packet summary       |                |
   |              | statistics at receiver           |                |
   | QEPS         | Threshold for determining queuing| 10ms           |
   |              | delay build up at receiver       |                |
   +..............+..................................+................+
   | QTH          | Delay threshold for non-linear   | 50ms           |
   |              | warping                          |                |
   | QMAX         | Delay upper bound for non-linear | 400ms          |
   |              | warping                          |                |
   | DLOSS        | Delay penalty for loss           | 1.0s           |
   | DMARK        | Delay penalty for ECN marking    | 200ms          |
   +..............+..................................+................+
   | GAMMA_MAX    | Upper bound on rate increase     | 50%            |
   |              | ratio for accelerated ramp-up    |                |
   | QBOUND       | Upper bound on self-inflicted    | 50ms           |
   |              | queuing delay during ramp up     |                |
   +..............+..................................+................+
   | FPS          | Frame rate of incoming video     | 30             |
   | BETA_S       | Scaling parameter for modulating | 0.1            |
   |              | outgoing sending rate            |                |
   | BETA_V       | Scaling parameter for modulating | 0.1            |
   |              | video encoder target rate        |                |
   | ALPHA        | Smoothing factor in exponential  | 0.1            |
skipping to change at page 8, line 17

The receiver-side algorithm can be outlined as below:

   On initialization:
      set d_base = +INFINITY
      set p_loss = 0
      set p_mark = 0
      set r_recv = 0
      set both t_last and t_curr as current time

   On receiving a media packet:
      obtain current timestamp t_curr from system clock
      obtain from packet header sending time stamp t_sent
      obtain one-way delay measurement: d_fwd = t_curr - t_sent
      update baseline delay: d_base = min(d_base, d_fwd)
      update queuing delay:  d_queue = d_fwd - d_base
      update packet loss ratio estimate p_loss
      update packet marking ratio estimate p_mark
      update measurement of receiving rate r_recv

   On time to send a new feedback report (t_curr - t_last > DELTA):
      calculate non-linear warping of delay d_tilde if packet loss exists
      calculate current aggregate congestion signal x_curr
      determine mode of rate adaptation for sender: rmode
      send RTCP feedback report containing values of: rmode, x_curr,
         and r_recv
      update t_last = t_curr
In order for a delay-based flow to hold its ground when competing
against loss-based flows (e.g., loss-based TCP), it is important to
distinguish between different levels of observed queuing delay.  For
instance, a moderate queuing delay value below 100ms is likely self-
inflicted or induced by other delay-based flows, whereas a high
queuing delay value of several hundreds of milliseconds may indicate
the presence of a loss-based flow that does not refrain from increased
delay.

When packet losses are observed, the estimated queuing delay follows a
non-linear warping inspired by the delay-adaptive congestion window
backoff policy in [Budzisz-TON11]:
              / d_queue,                 if d_queue < QTH;
              |
              |      (QMAX - d_queue)^4
   d_tilde = <  QTH ------------------,  if QTH < d_queue < QMAX    (1)
              |       (QMAX - QTH)^4
              |
              \ 0,                       otherwise.

Here, the queuing delay value is unchanged when it is below the first
threshold QTH; it is scaled down following a non-linear curve when its
value falls between QTH and QMAX; above QMAX, the high queuing delay
value no longer counts toward congestion control.
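
The piecewise definition in (1) can be summarized by the following
non-normative Python sketch.  It is not part of the algorithm
specification; the parameter values are the defaults from Figure 3 and
the function name is chosen here for illustration only.

   # Non-normative sketch of the delay warping in (1).
   QTH  = 0.050   # delay threshold for non-linear warping (50ms)
   QMAX = 0.400   # delay upper bound for non-linear warping (400ms)

   def warp_delay(d_queue):
       """Return d_tilde, the equivalent delay after non-linear warping."""
       if d_queue < QTH:
           return d_queue
       elif d_queue < QMAX:
           return QTH * ((QMAX - d_queue) ** 4) / ((QMAX - QTH) ** 4)
       else:
           return 0.0
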
The aggregate congestion signal is:

   x_curr = d_tilde + p_mark*DMARK + p_loss*DLOSS.              (2)
Here, DMARK is the prescribed delay penalty associated with ECN
markings and DLOSS is the prescribed delay penalty associated with
packet losses.  The values of DLOSS and DMARK do not depend on
configurations at the network node.  Since ECN-enabled active queue
management schemes typically mark a packet before dropping it, the
value of DLOSS SHOULD be higher than that of DMARK.  Furthermore, the
values of DLOSS and DMARK need to be set consistently across all NADA
flows for them to compete fairly.

In the absence of packet marking and losses, the value of x_curr
reduces to the observed queuing delay d_queue.  In that case the NADA
algorithm operates in the regime of delay-based adaptation.
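
As an illustration of (2), the following non-normative Python sketch
combines the warped delay with the loss and marking penalties; the
default penalty values from Figure 3 are assumed.

   DLOSS = 1.0   # delay penalty for loss, in seconds
   DMARK = 0.2   # delay penalty for ECN marking, in seconds

   def aggregate_congestion_signal(d_tilde, p_mark, p_loss):
       """Combine warped delay, marking ratio, and loss ratio into x_curr."""
       return d_tilde + p_mark * DMARK + p_loss * DLOSS
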
Given observed per-packet delay and loss information, the receiver is
also in a good position to determine whether the network is
underutilized and recommend the corresponding rate adaptation mode for
the sender.  The criteria for operating in accelerated ramp-up mode
are:

o No recent packet losses within the observation window LOGWIN; and

o No build-up of queuing delay: d_fwd-d_base < QEPS for all previous
  delay samples within the observation window LOGWIN.

Otherwise the algorithm operates in gradual update mode.
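
The following non-normative Python sketch illustrates this mode
decision at the receiver.  The names loss_in_window and delay_samples
stand for the packet statistics kept over the observation window
LOGWIN and are assumptions of this sketch.

   QEPS = 0.010   # threshold for detecting queuing delay build-up (10ms)

   def rate_adaptation_mode(loss_in_window, delay_samples, d_base):
       """Return rmode: 0 (accelerated ramp-up) or 1 (gradual update)."""
       no_loss = not loss_in_window
       no_buildup = all(d_fwd - d_base < QEPS for d_fwd in delay_samples)
       return 0 if (no_loss and no_buildup) else 1
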
4.3.  Sender-Side Algorithm

The sender-side algorithm is outlined as follows:

   on initialization:
      set r_ref = RMIN
      set rtt = 0
      set x_prev = 0
      set t_last and t_curr as current system clock time

   on receiving feedback report:
      obtain current timestamp from system clock: t_curr
      obtain values of rmode, x_curr, and r_recv from feedback report
      update estimation of rtt
      measure feedback interval: delta = t_curr - t_last
      if rmode == 0:
         update r_ref following accelerated ramp-up rules
      else:
         update r_ref following gradual update rules
      clip rate r_ref within the range of [RMIN, RMAX]
      x_prev = x_curr
      t_last = t_curr
In accelerated ramp-up mode, the rate r_ref is updated as follows:

                                 QBOUND
      gamma = min(GAMMA_MAX, ------------------)                 (3)
                              rtt+DELTA+DFILT

      r_ref = max(r_ref, (1+gamma) r_recv)                       (4)

The rate increase multiplier gamma is calculated as a function of the
upper bound of self-inflicted queuing delay (QBOUND), the round-trip-
time (rtt), the target feedback interval (DELTA), and the bound on
filtering delay for calculating d_queue (DFILT).  It has a maximum
value of GAMMA_MAX.  The rationale behind (3)-(4) is that the longer
it takes for the sender to observe self-inflicted queuing delay build-
up, the more conservative the sender should be in increasing its rate,
hence the smaller the rate increase multiplier.
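
A non-normative Python sketch of the ramp-up rules (3)-(4) is given
below; default values of QBOUND, GAMMA_MAX, DELTA, and DFILT from
Figure 3 are assumed, with times in seconds and rates in bps.

   QBOUND, GAMMA_MAX = 0.050, 0.5
   DELTA, DFILT = 0.100, 0.120

   def accelerated_ramp_up(r_ref, r_recv, rtt):
       """Update the reference rate following (3)-(4)."""
       gamma = min(GAMMA_MAX, QBOUND / (rtt + DELTA + DFILT))
       return max(r_ref, (1.0 + gamma) * r_recv)
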
In gradual update mode, the rate r_ref is updated as:

      x_offset = x_curr - PRIO*XREF*RMAX/r_ref                   (5)

      x_diff   = x_curr - x_prev                                 (6)

                            delta    x_offset
      r_ref = r_ref - KAPPA*-------*----------*r_ref
                             TAU       TAU

                                   x_diff
                      - KAPPA*ETA*--------*r_ref                 (7)
                                    TAU

The rate changes in proportion to the previous rate decision.  It is
affected by two terms: the offset of the aggregate congestion signal
from its value at equilibrium (x_offset) and its change (x_diff).  The
calculation of x_offset depends on the maximum rate of the flow
(RMAX), its weight of priority (PRIO), as well as a reference
congestion signal (XREF).  The value of XREF is chosen so that the
maximum rate of RMAX can be achieved when the observed congestion
signal level is below PRIO*XREF.

At equilibrium, the aggregated congestion signal stabilizes at x_curr
= PRIO*XREF*RMAX/r_ref.  This ensures that when multiple flows share
the same bottleneck and observe a common value of x_curr, their rates
at equilibrium will be proportional to their respective priority
levels (PRIO) and maximum rates (RMAX).  Values of RMIN and RMAX will
be provided by the media codec, as specified in
[I-D.ietf-rmcat-cc-codec-interactions].  In the absence of such
information, the NADA sender will choose a default value of 0 for
RMIN, and 2 Mbps for RMAX.
As mentioned in the sender-side algorithm, the final rate is clipped
within the dynamic range specified by the application:

      r_ref = min(r_ref, RMAX)                                   (8)

      r_ref = max(r_ref, RMIN)                                   (9)

The above operations ignore many practical issues such as clock
synchronization between sender and receiver, filtering of noise in
delay measurements, and base delay expiration.  These will be
addressed in Section 5.
5.  Practical Implementation of NADA

5.1.  Receiver-Side Operation

The receiver continuously monitors end-to-end per-packet statistics in
terms of delay, loss, and/or ECN marking ratios.  It then aggregates
all forms of congestion indicators into the form of an equivalent
delay and periodically reports this back to the sender.  In addition,
the receiver tracks the receiving rate of the flow and includes that
in the feedback message.

5.1.1.  Estimation of one-way delay and queuing delay
The delay estimation process in NADA follows a similar approach as in
earlier delay-based congestion control schemes, such as LEDBAT
[RFC6817].  Instead of relying on RTP timestamps, the NADA sender
generates its own timestamp based on the local system clock and embeds
that information in the transport packet header.  The NADA receiver
estimates the forward delay as having a constant base delay component
plus a time-varying queuing delay component.  The base delay is
estimated as the minimum value of one-way delay observed over a
relatively long period (e.g., tens of minutes), whereas the individual
queuing delay value is taken to be the difference between one-way
delay and base delay.  All delay estimations are based on sender
timestamps with a higher granularity than RTP timestamps.

The individual sample values of queuing delay should be further
filtered against various non-congestion-induced noise, such as spikes
due to processing "hiccups" at the network nodes.  The current
implementation employs a 15-tap minimum filter over per-packet queuing
delay estimates.
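
A non-normative Python sketch of this per-packet estimation, including
the 15-tap minimum filter, is shown below; the class and method names
are assumptions of this sketch.

   from collections import deque

   class DelayEstimator:
       def __init__(self, taps=15):
           self.d_base = float('inf')        # baseline (minimum) one-way delay
           self.recent = deque(maxlen=taps)  # recent queuing delay samples

       def on_packet(self, t_sent, t_recv):
           """Return the filtered queuing delay sample for one packet."""
           d_fwd = t_recv - t_sent           # one-way delay; fixed clock offset
                                             # cancels out via d_base subtraction
           self.d_base = min(self.d_base, d_fwd)
           self.recent.append(d_fwd - self.d_base)
           return min(self.recent)           # 15-tap minimum filter
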
5.1.2.  Estimation of packet loss/marking ratio

The receiver detects packet losses via gaps in the RTP sequence
numbers of received packets.  Packets arriving out-of-order are
discarded, and count towards losses.  The instantaneous packet loss
ratio p_inst is estimated as the ratio between the number of missing
packets over the number of total transmitted packets within the recent
observation window LOGWIN.  The packet loss ratio p_loss is obtained
after exponential smoothing:

   p_loss = ALPHA*p_inst + (1-ALPHA)*p_loss.                    (10)

The filtered result is reported back to the sender as the observed
packet loss ratio p_loss.
Estimation of packet marking ratio p_mark follows the same procedure
as above.  It is assumed that ECN marking information at the IP header
can be passed to the receiving endpoint, e.g., by following the
mechanism described in [RFC6679].
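
The smoothing step in (10) can be illustrated by the following
non-normative Python sketch; the per-window counters supplied by the
caller are assumptions of this sketch, and the same routine applies to
the marking ratio p_mark.

   ALPHA = 0.1   # smoothing factor in exponential smoothing

   def update_loss_ratio(p_loss, n_missing, n_expected):
       """Update p_loss from per-window counts following (10)."""
       p_inst = n_missing / float(n_expected) if n_expected > 0 else 0.0
       return ALPHA * p_inst + (1.0 - ALPHA) * p_loss
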
5.1.3.  Estimation of receiving rate

It is fairly straightforward to estimate the receiving rate r_recv.
NADA maintains a recent observation window with time span of LOGWIN,
and simply divides the total size of packets arriving during that
window over the time span.  The receiving rate (r_recv) is included as
part of the feedback report.
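
A non-normative Python sketch of this calculation follows; the list of
(arrival time, size) records kept by the receiver is an assumption of
this sketch.

   LOGWIN = 0.500   # observation window, in seconds

   def receiving_rate(arrivals, t_now):
       """Return r_recv in bps from (arrival_time, size_in_bytes) records."""
       recent = [size for (t, size) in arrivals if t_now - t <= LOGWIN]
       return 8.0 * sum(recent) / LOGWIN
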
5.2.  Sender-Side Operation

Figure 4 provides a detailed view of the NADA sender.  Upon receipt of
an RTCP feedback report from the receiver, the NADA sender calculates
the reference rate r_ref as specified in Section 4.3.  It further
adjusts both the target rate for the live video encoder r_vin and the
sending rate r_send over the network based on the updated value of
r_ref and rate shaping buffer occupancy buffer_len.

The NADA sender behavior stays the same in the presence of all types
of congestion indicators: delay, loss, and ECN marking.  This unified
approach allows a graceful transition of the scheme as the network
shifts dynamically between light and heavy congestion levels.
              +----------------+
              |   Calculate    | <---- RTCP report
              | Reference Rate |
              +----------------+
                      | r_ref
                      |
         +------------+-------------+
         |                          |
        \|/                        \|/
   +-----------------+      +---------------+
   | Calculate Video |      |   Calculate   |
   |   Target Rate   |      |  Sending Rate |
   +-----------------+      +---------------+
         |    /|\              /|\      |
   r_vin |     |                |       |
        \|/    +----------------+       |

skipping to change at page 15, line 8

o To deplete the rate shaping buffer faster by increasing the sending
  rate r_send; and

o To limit incoming packets of the rate shaping buffer by reducing the
  video encoder target rate r_vin.

5.2.2.  Adjusting video target rate and sending rate
The target rate for the live video encoder deviates from the network
congestion control rate r_ref based on the level of occupancy in the
rate shaping buffer:

   r_vin  = r_ref - BETA_V*8*buffer_len*FPS.                    (11)

The actual sending rate r_send is regulated in a similar fashion:

   r_send = r_ref + BETA_S*8*buffer_len*FPS.                    (12)
In (11) and (12), the first term indicates the rate calculated from
network congestion feedback alone.  The second term indicates the
influence of the rate shaping buffer.  A large rate shaping buffer
nudges the encoder target rate slightly below -- and the sending rate
slightly above -- the reference rate r_ref.

Intuitively, the amount of extra rate offset needed to completely
drain the rate shaping buffer within the duration of a single video
frame is given by 8*buffer_len*FPS, where FPS stands for the frame
rate of the video.  The scaling parameters BETA_V and BETA_S can be
tuned to balance between the competing goals of maintaining a small
rate shaping buffer and deviating from the reference rate point.
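
A non-normative Python sketch of the adjustments in (11)-(12) follows;
the default values of BETA_V, BETA_S, and FPS from Figure 3 are
assumed, with buffer_len in bytes and rates in bps.

   BETA_V, BETA_S, FPS = 0.1, 0.1, 30

   def adjust_rates(r_ref, buffer_len):
       """Return (r_vin, r_send) following (11)-(12)."""
       r_vin  = r_ref - BETA_V * 8 * buffer_len * FPS
       r_send = r_ref + BETA_S * 8 * buffer_len * FPS
       return r_vin, r_send
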
5.3.  Feedback Message Requirements
The following list of information is required for NADA congestion
control to function properly:
o Recommended rate adaptation mode (rmode): a 1-bit flag indicating
whether the sender should operate in accelerated ramp-up mode
(rmode=0) or gradual update mode (rmode=1).
o Aggregated congestion signal (x_curr): the most recently updated
  value, calculated by the receiver according to Section 4.2.  This
  information is expressed with a unit of 100 microseconds (i.e.,
  1/10 of a millisecond) in 15 bits.  This allows a maximum value of
  x_curr at approximately 3.27 seconds.
o Receiving rate (r_recv): the most recently measured receiving rate
  according to Section 5.1.3.  This information is expressed with a
  unit of 10 Kilobits per second (Kbps) in 16 bits.  This allows a
  maximum rate of approximately 655.35 Mbps.
The above list of information can be accommodated by 32 bits in total.
Choice of the feedback message interval DELTA is discussed in
Section 6.3.  A target feedback interval of DELTA=100ms is
recommended.
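
As a non-normative illustration, the three fields above can be packed
into a single 32-bit word as sketched below; the exact bit layout
shown is an assumption of this sketch and is not defined by this
document.

   def pack_feedback(rmode, x_curr_seconds, r_recv_bps):
       """Pack rmode (1 bit), x_curr (15 bits, 100us units), and
          r_recv (16 bits, 10 Kbps units) into one 32-bit word."""
       x_units = min(int(x_curr_seconds * 10000), 0x7FFF)
       r_units = min(int(r_recv_bps / 10000), 0xFFFF)
       return ((rmode & 0x1) << 31) | (x_units << 16) | r_units
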
6.  Discussions and Further Investigations

6.1.  Choice of delay metrics

The current design works with relative one-way-delay (OWD) as the main
indication of congestion.  The value of the relative OWD is obtained
by maintaining the minimum value of observed OWD over a relatively
long time horizon and subtracting that out from the observed absolute
OWD value.  Such an approach cancels out the fixed difference between
the sender and receiver clocks.  It has been widely adopted by other
delay-based congestion control approaches such as [RFC6817].  As
discussed in [RFC6817], the time horizon for tracking the minimum OWD
needs to be chosen with care: it must be long enough for an
opportunity to observe the minimum OWD with zero standing queue along
the path, and sufficiently short so as to timely reflect "true"
changes in minimum OWD introduced by route changes and other rare
events.
The potential drawback in relying on relative OWD as the congestion
signal is that when multiple flows share the same bottleneck, the flow
arriving late at the network experiencing a non-empty queue may
mistakenly consider the standing queuing delay as part of the fixed
path propagation delay.  This will lead to slightly unfair bandwidth
sharing among the flows.

skipping to change at page 17, line 42

feedback message --- less than 0.1% overhead for a 1 Mbps flow.
Furthermore, both simulation studies and frequency-domain analysis
have established that a feedback interval below 250ms will not break
up the feedback control loop of NADA congestion control.

In calculating the non-linear warping of delay in (1), the current
design uses fixed values of QTH and QMAX.  It is possible to adapt the
value of both based on past observations of queuing delay in the
presence of packet losses.
In calculating the aggregate congestion signal x_curr, the choice of
DMARK and DLOSS influences the steady-state packet loss/marking ratio
experienced by the flow at a given available bandwidth.  Higher values
of DMARK and DLOSS result in lower steady-state loss/marking ratios,
but are more susceptible to the impact of individual packet
loss/marking events.  While the values of DMARK and DLOSS are fixed
and predetermined in the current design, a scheme for automatically
tuning these values based on desired bandwidth sharing behavior in the
presence of other competing loss-based flows (e.g., loss-based TCP) is
under investigation.

[Editor's note: Choice of start value: is this in scope of congestion
control, or should this be decided by the application?]
6.4.  Sender-based vs. receiver-based calculation

In the current design, the aggregated congestion signal x_curr is
calculated at the receiver, keeping the sender operation completely
independent of the form of actual network congestion indications
(delay, loss, or marking).  Alternatively, one can move the logic of
(1) and (2) to the sender.  Such an approach requires slightly higher
overhead in the feedback messages, which should contain individual
fields on queuing delay (d_queue), packet loss ratio (p_loss), packet
marking ratio (p_mark), receiving rate (r_recv), and recommended rate
adaptation mode (rmode).
6.5.  Incremental deployment

One nice property of NADA is the consistent video endpoint behavior
irrespective of network node variations.  This facilitates gradual,
incremental adoption of the scheme.

To start off with, the proposed congestion control mechanism can be

skipping to change at page 18, line 48

standing queues and lower end-to-end delay and work seamlessly with
existing senders and receivers.
7.  Implementation Status

The NADA scheme has been implemented in [ns-2] and [ns-3] simulation
platforms.  Extensive ns-2 simulation evaluations of an earlier
version of the draft are documented in [Zhu-PV13].  Evaluation results
of the current draft over several test cases in
[I-D.ietf-rmcat-eval-test] have been presented at recent IETF meetings
[IETF-90][IETF-91].  Evaluation results of the current draft over
several test cases in [I-D.ietf-rmcat-wireless-tests] have been
presented at [IETF-93].

The scheme has also been implemented and evaluated in a lab setting as
described in [IETF-90].  Preliminary evaluation results of NADA in
single-flow and multi-flow scenarios have been presented in [IETF-91].
8.  IANA Considerations

This document makes no request of IANA.

skipping to change at page 19, line 27

O'Hanlon, Ingemar Johansson, Stefan Holmer, Cesar Ilharco Magalhaes,
Safiqul Islam, Mirja Kuhlewind, and Karen Elisabeth Egede Nielsen for
their valuable questions and comments on earlier versions of this
draft.
10. References 10. References
10.1. Normative References 10.1. Normative References
[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
           Requirement Levels", BCP 14, RFC 2119,
           DOI 10.17487/RFC2119, March 1997,
           <http://www.rfc-editor.org/info/rfc2119>.

[RFC3168]  Ramakrishnan, K., Floyd, S., and D. Black, "The Addition
           of Explicit Congestion Notification (ECN) to IP",
           RFC 3168, DOI 10.17487/RFC3168, September 2001,
           <http://www.rfc-editor.org/info/rfc3168>.

[RFC3550]  Schulzrinne, H., Casner, S., Frederick, R., and V.
           Jacobson, "RTP: A Transport Protocol for Real-Time
           Applications", STD 64, RFC 3550, DOI 10.17487/RFC3550,
           July 2003, <http://www.rfc-editor.org/info/rfc3550>.
[RFC6679]  Westerlund, M., Johansson, I., Perkins, C., O'Hanlon, P.,
           and K. Carlberg, "Explicit Congestion Notification (ECN)
           for RTP over UDP", RFC 6679, DOI 10.17487/RFC6679,
           August 2012, <http://www.rfc-editor.org/info/rfc6679>.

[I-D.ietf-rmcat-eval-test]
           Sarker, Z., Singh, V., Zhu, X., and M. Ramalho, "Test
           Cases for Evaluating RMCAT Proposals",
           draft-ietf-rmcat-eval-test-03 (work in progress),
           March 2016.
[I-D.ietf-rmcat-cc-requirements]
           Jesup, R. and Z. Sarker, "Congestion Control Requirements
           for Interactive Real-Time Media",
           draft-ietf-rmcat-cc-requirements-09 (work in progress),
           December 2014.
[I-D.ietf-rmcat-video-traffic-model]
           Zhu, X., Cruz, S., and Z. Sarker, "Modeling Video Traffic
           Sources for RMCAT Evaluations",
           draft-ietf-rmcat-video-traffic-model-00 (work in
           progress), January 2016.

[I-D.ietf-rmcat-cc-codec-interactions]
           Zanaty, M., Singh, V., Nandakumar, S., and Z. Sarker,
           "Congestion Control and Codec interactions in RTP
           Applications", draft-ietf-rmcat-cc-codec-interactions-01
           (work in progress), October 2015.

[I-D.ietf-rmcat-wireless-tests]
           Sarker, Z., Johansson, I., Zhu, X., Fu, J., Tan, W., and
           M. Ramalho, "Evaluation Test Cases for Interactive Real-
           Time Media over Wireless Networks",
           draft-ietf-rmcat-wireless-tests-01 (work in progress),
           November 2015.
10.2. Informative References
[RFC2309]  Braden, B., Clark, D., Crowcroft, J., Davie, B., Deering,
           S., Estrin, D., Floyd, S., Jacobson, V., Minshall, G.,
           Partridge, C., Peterson, L., Ramakrishnan, K., Shenker,
           S., Wroclawski, J., and L. Zhang, "Recommendations on
           Queue Management and Congestion Avoidance in the
           Internet", RFC 2309, DOI 10.17487/RFC2309, April 1998,
           <http://www.rfc-editor.org/info/rfc2309>.

[RFC5348]  Floyd, S., Handley, M., Padhye, J., and J. Widmer, "TCP
           Friendly Rate Control (TFRC): Protocol Specification",
           RFC 5348, DOI 10.17487/RFC5348, September 2008,
           <http://www.rfc-editor.org/info/rfc5348>.

[RFC6660]  Briscoe, B., Moncaster, T., and M. Menth, "Encoding Three
           Pre-Congestion Notification (PCN) States in the IP Header
           Using a Single Diffserv Codepoint (DSCP)", RFC 6660,
           DOI 10.17487/RFC6660, July 2012,
           <http://www.rfc-editor.org/info/rfc6660>.

[RFC6817]  Shalunov, S., Hazel, G., Iyengar, J., and M. Kuehlewind,
           "Low Extra Delay Background Transport (LEDBAT)", RFC 6817,
           DOI 10.17487/RFC6817, December 2012,
           <http://www.rfc-editor.org/info/rfc6817>.

[Floyd-CCR00]
           Floyd, S., Handley, M., Padhye, J., and J. Widmer,
           "Equation-based Congestion Control for Unicast
           Applications", ACM SIGCOMM Computer Communications
           Review vol. 30, no. 4, pp. 43-56, October 2000.

[Budzisz-TON11]
           Budzisz, L., Stanojevic, R., Schlote, A., Baker, F., and
           R. Shorten, "On the Fair Coexistence of Loss- and Delay-
           Based TCP", IEEE/ACM Transactions on Networking vol. 19,
           no. 6, pp. 1811-1824, December 2011.

[Zhu-PV13]
           Zhu, X. and R. Pan, "NADA: A Unified Congestion Control
           Scheme for Low-Latency Interactive Video", in Proc. IEEE
skipping to change at page 21, line 40
           Evaluation Results", July 2014,
           <https://tools.ietf.org/agenda/90/slides/slides-90-rmcat-6.pdf>.
[IETF-91]  Zhu, X., Pan, R., Ramalho, M., Mena, S., Ganzhorn, C.,
           Jones, P., and S. D'Aronco, "NADA Algorithm Update and
           Test Case Evaluations", November 2014,
           <http://www.ietf.org/proceedings/interim/2014/11/09/
           rmcat/slides/slides-interim-2014-rmcat-1-2.pdf>.
[IETF-93]  Zhu, X., Pan, R., Ramalho, M., Mena, S., Ganzhorn, C.,
           Jones, P., D'Aronco, S., and J. Fu, "Updates on NADA",
           July 2015, <https://www.ietf.org/proceedings/93/slides/
           slides-93-rmcat-0.pdf>.
Appendix A. Network Node Operations
NADA can work with different network queue management schemes and
does not assume any specific network node operation.  As an example,
this appendix describes three variants of queue management behavior
at the network node, leading to either implicit or explicit
congestion signals.
In all three flavors described below, the network queue operates with
the simple first-in-first-out (FIFO) principle.  There is no need to
maintain per-flow state.  The system can scale easily with a large
number of video flows and at high link capacity.
A.1. Default behavior of drop tail queues
In a conventional network with drop tail or RED queues, congestion is
inferred from the estimation of end-to-end delay and/or packet loss.
Packet drops at the queue are detected at the receiver and contribute
to the calculation of the aggregated congestion signal x_curr.  No
special action is required at the network node.
A.2. RED-based ECN marking
In this mode, the network node randomly marks the ECN field in the IP
packet header following the Random Early Detection (RED) algorithm
[RFC2309].  Calculation of the marking probability involves the
following steps:

on packet arrival:
   update smoothed queue size q_avg as:
skipping to change at page 22, line 43
       |        q_avg - q_lo
  p =  <  p_max*--------------,  if q_lo <= q < q_hi;
       |         q_hi - q_lo
       |
       \  1,                     if q >= q_hi.
Here, q_lo and q_hi correspond to the low and high thresholds of
queue occupancy.  The maximum marking probability is p_max.
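As a non-normative aid, the sketch below implements the marking
decision described above.  It applies the thresholds to the smoothed
queue size q_avg, which is the conventional RED interpretation where
the excerpt is ambiguous, and the EWMA weight w is an assumed example
value since the smoothing step itself is abbreviated above.

   import random

   def red_ecn_mark(q, q_avg, q_lo, q_hi, p_max, w=0.002):
       # Update the smoothed queue size (EWMA weight w is an assumed example).
       q_avg = (1.0 - w) * q_avg + w * q
       # Piecewise marking probability following the RED profile above.
       if q_avg < q_lo:
           p = 0.0
       elif q_avg < q_hi:
           p = p_max * (q_avg - q_lo) / (q_hi - q_lo)
       else:
           p = 1.0
       # Mark the packet's ECN field with probability p.
       return q_avg, random.random() < p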
The ECN marking events will contribute to the calculation of an
equivalent delay x_curr at the receiver.  No changes are required at
the sender.
A.3. Random Early Marking with Virtual Queues
Advanced network nodes may support random early marking based on a
token bucket algorithm originally designed for Pre-Congestion
Notification (PCN) [RFC6660].  In this mode, the early congestion
notification (ECN) bit in the IP header of each packet is marked
randomly, with a marking probability calculated from the state of the
token bucket.  The target link utilization is set to 90%; the marking
skipping to change at page 23, line 30
       |
       \  1,                     if b - b_tk >= b_hi.
Here, the token bucket lower and upper limits are denoted by b_lo and
b_hi, respectively.  The parameter b indicates the size of the token
bucket.  The parameter r is chosen to be below capacity, resulting in
slight under-utilization of the link.  The maximum marking
probability is p_max.
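The following rough sketch illustrates one plausible form of this
PCN-style marking.  Only the final case of the piecewise marking
function (p = 1 once b - b_tk reaches b_hi) is visible in the excerpt
above; the zero-marking region below b_lo and the linear ramp scaled
by p_max between b_lo and b_hi are assumptions patterned on the RED
profile in A.2, and the helper names are invented for illustration.

   import random

   def pcn_mark(pkt_size, now, state, r, b, b_lo, b_hi, p_max):
       # state is a dict carrying the virtual queue across calls:
       #   state = {'b_tk': <current token count>, 't_last': <last update>}
       # Refill tokens at rate r (chosen below link capacity), capped at b.
       state['b_tk'] = min(b, state['b_tk'] + r * (now - state['t_last']))
       state['t_last'] = now
       # Each arriving packet consumes tokens equal to its size.
       state['b_tk'] = max(0.0, state['b_tk'] - pkt_size)

       depletion = b - state['b_tk']   # how far the virtual queue has built up
       if depletion < b_lo:            # assumed: no marking while lightly loaded
           p = 0.0
       elif depletion < b_hi:          # assumed: linear ramp up to p_max
           p = p_max * (depletion - b_lo) / (b_hi - b_lo)
       else:                           # case shown in the excerpt above
           p = 1.0
       return random.random() < p      # True => set ECN-CE on this packet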
The ECN marking events will contribute to the calculation of an
equivalent delay x_curr at the receiver.  No changes are required at
the sender.  The virtual queuing mechanism from the PCN-based marking
algorithm will lead to additional benefits such as zero standing
queues.
Authors' Addresses
Xiaoqing Zhu
Cisco Systems
12515 Research Blvd., Building 4
Austin, TX  78759
USA