Transport Area Working Group                                  B. Briscoe
Internet-Draft                                                        BT
Updates: 2309 (if approved)                                    J. Manner
Intended status: BCP                                    Aalto University
Expires: May 3, 2012                                    October 31, 2011


                Byte and Packet Congestion Notification
                  draft-ietf-tsvwg-byte-pkt-congest-05

Abstract

   This memo concerns dropping or marking packets using active queue
   management (AQM) such as random early detection (RED) or pre-
   congestion notification (PCN).  We give three strong recommendations:
   (1) packet size should be taken into account when transports read and
   respond to congestion indications, (2) packet size should not be
   taken into account when network equipment creates congestion signals
   (marking, dropping), and therefore (3) the byte-mode packet drop
   variant of the RED AQM algorithm that drops fewer small packets
   should not be used.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on May 3, 2012.

Copyright Notice

   Copyright (c) 2011 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Introduction . . . . . . . . . . . . . . . . . . . . . . . . .  4
     1.1.  Terminology and Scoping  . . . . . . . . . . . . . . . . .  6
     1.2.  Example Comparing Packet-Mode Drop and Byte-Mode Drop  . .  7
   2.  Recommendations  . . . . . . . . . . . . . . . . . . . . . . .  8
     2.1.  Recommendation on Queue Measurement  . . . . . . . . . . .  9
     2.2.  Recommendation on Encoding Congestion Notification . . . .  9
     2.3.  Recommendation on Responding to Congestion . . . . . . . . 10
     2.4.  Recommendation on Handling Congestion Indications when
           Splitting or Merging Packets . . . . . . . . . . . . . . . 11
   3.  Motivating Arguments . . . . . . . . . . . . . . . . . . . . . 12
     3.1.  Avoiding Perverse Incentives to (Ab)use Smaller Packets  . 12
     3.2.  Small != Control . . . . . . . . . . . . . . . . . . . . . 13
     3.3.  Transport-Independent Network  . . . . . . . . . . . . . . 13
     3.4.  Scaling Congestion Control with Packet Size  . . . . . . . 14
     3.5.  Implementation Efficiency  . . . . . . . . . . . . . . . . 16
   4.  A Survey and Critique of Past Advice . . . . . . . . . . . . . 16
     4.1.  Congestion Measurement Advice  . . . . . . . . . . . . . . 16
       4.1.1.  Fixed Size Packet Buffers  . . . . . . . . . . . . . . 17
       4.1.2.  Congestion Measurement without a Queue . . . . . . . . 18
     4.2.  Congestion Notification Advice . . . . . . . . . . . . . . 19
       4.2.1.  Network Bias when Encoding . . . . . . . . . . . . . . 19
       4.2.2.  Transport Bias when Decoding . . . . . . . . . . . . . 21
       4.2.3.  Making Transports Robust against Control Packet
               Losses . . . . . . . . . . . . . . . . . . . . . . . . 22
       4.2.4.  Congestion Notification: Summary of Conflicting
               Advice . . . . . . . . . . . . . . . . . . . . . . . . 22
   5.  Outstanding Issues and Next Steps  . . . . . . . . . . . . . . 24
     5.1.  Bit-congestible Network  . . . . . . . . . . . . . . . . . 24
     5.2.  Bit- & Packet-congestible Network  . . . . . . . . . . . . 24
   6.  Security Considerations  . . . . . . . . . . . . . . . . . . . 24
   7.  Conclusions  . . . . . . . . . . . . . . . . . . . . . . . . . 25
   8.  Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 26
   9.  Comments Solicited . . . . . . . . . . . . . . . . . . . . . . 27
   10. References . . . . . . . . . . . . . . . . . . . . . . . . . . 27
     10.1. Normative References . . . . . . . . . . . . . . . . . . . 27
     10.2. Informative References . . . . . . . . . . . . . . . . . . 27
   Appendix A.  Survey of RED Implementation Status . . . . . . . . . 31
   Appendix B.  Sufficiency of Packet-Mode Drop . . . . . . . . . . . 32
     B.1.  Packet-Size (In)Dependence in Transports . . . . . . . . . 33
     B.2.  Bit-Congestible and Packet-Congestible Indications . . . . 35
   Appendix C.  Byte-mode Drop Complicates Policing Congestion
                Response  . . . . . . . . . . . . . . . . . . . . . . 37
   Appendix D.  Changes from Previous Versions  . . . . . . . . . . . 38

1.  Introduction

   This memo concerns how we should correctly scale congestion control
   functions with packet size for the long term.  It also recognises
   that expediency may be necessary to deal with existing widely
   deployed protocols that don't live up to the long term goal.

   When notifying congestion, the problem of how (and whether) to take
   packet sizes into account has exercised the minds of researchers and
   practitioners for as long as active queue management (AQM) has been
   discussed.  Indeed, one reason AQM was originally introduced was to
   reduce the lock-out effects that small packets can have on large
   packets in drop-tail queues.  This memo aims to state the principles
   we should be using and to outline how these principles will affect
   future protocol design, taking into account the existing deployments
   we have already.

   The question of whether to take into account packet size arises at
   three stages in the congestion notification process:

   Measuring congestion:  When a congested resource measures locally how
      congested it is, should it measure its queue length in bytes or
      packets?

   Encoding congestion notification into the wire protocol:  When a
      congested network resource notifies its level of congestion,
      should it drop / mark each packet dependent on the byte-size of
      the packet in question?

   Decoding congestion notification from the wire protocol:  When a
      transport interprets the notification in order to decide how much
      to respond to congestion, should it take into account the byte-
      size of each missing or marked packet?

   Consensus has emerged over the years concerning the first stage:
   whether queues are measured in bytes or packets, termed byte-mode
   queue measurement or packet-mode queue measurement.  Section 2.1 of
   this memo records this consensus in the RFC Series.  In summary the
   choice solely depends on whether the resource is congested by bytes
   or packets.

   The controversy is mainly around the last two stages: whether to
   allow for the size of the specific packet notifying congestion i)
   when the network encodes or ii) when the transport decodes the
   congestion notification.

   Currently, the RFC series is silent on this matter other than a paper
   trail of advice referenced from [RFC2309], which conditionally
   recommends byte-mode (packet-size dependent) drop [pktByteEmail].
   Reducing drop of small packets certainly has some tempting
   advantages: i) it drops fewer control packets, which tend to be small
   and ii) it makes TCP's bit-rate less dependent on packet size.
   However, there are ways of addressing these issues at the transport
   layer, rather than reverse engineering network forwarding to fix
   specific transport problems.

   This memo updates [RFC2309] to deprecate deliberate preferential
   treatment of small packets in AQM algorithms.  It recommends that (1)
   packet size should be taken into account when transports read
   congestion indications, (2) not when network equipment writes them.

   In particular this means that the byte-mode packet drop variant of
   Random Early Detection (RED) should not be used to drop fewer small
   packets, because that creates a perverse incentive for transports to
   use tiny segments, consequently also opening up a DoS vulnerability.
   Fortunately all the RED implementers who responded to our admittedly
   limited survey (Appendix A) have not followed the earlier advice to
   use byte-mode drop, so the position this memo argues for seems to
   already exist in implementations.

   However, at the transport layer, TCP congestion control is a widely
   deployed protocol that doesn't scale correctly with packet size.  To
   date this hasn't been a significant problem because most TCP
   implementations have been used with similar packet sizes.  But, as we
   design new congestion control mechanisms, the current recommendation
   is that we should build in scaling with packet size rather than
   assuming we should follow TCP's example.

   This memo continues as follows.  First it discusses terminology and
   scoping.  Section 2 gives the concrete formal recommendations,
   followed by motivating arguments in Section 3.  We then critically
   survey the advice given previously in the RFC series and the research
   literature (Section 4), referring to an assessment of whether or not
   this advice has been followed in production networks (Appendix A).
   To wrap up, outstanding issues are discussed that will need
   resolution both to inform future protocol designs and to handle
   legacy (Section 5).  Then security issues are collected together in
   Section 6 before conclusions are drawn in Section 7.  The interested
   reader can find discussion of more detailed issues on the theme of
   byte vs. packet in the appendices.

   This memo intentionally includes a non-negligible amount of material
   on the subject.  For the busy reader, Section 2 summarises the
   recommendations for the Internet community.

1.1.  Terminology and Scoping

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in [RFC2119].

   Congestion Notification:  Rather than aim to achieve what many have
      tried and failed, this memo will not try to define congestion.  It
      will give a working definition of what congestion notification
      should be taken to mean for this document.  Congestion
      notification is a changing signal that aims to communicate the
      probability that the network resource(s) will not be able to
      forward the level of traffic load offered (or that there is an
      impending risk that they will not be able to).

      The `impending risk' qualifier is added, because AQM systems (e.g.
      RED, PCN [RFC5670]) set a virtual limit smaller than the actual
      limit to the resource, then notify when this virtual limit is
      exceeded in order to avoid uncontrolled congestion of the actual
      capacity.

      Congestion notification communicates a real number bounded by the
      range [0,1].  This ties in with the most well-understood measure
      of congestion notification: drop probability (often loosely called
      loss rate).  It also means that congestion has a natural
      interpretation as a probability.

   Explicit and Implicit Notification:  The byte vs. packet dilemma
      concerns congestion notification irrespective of whether it is
      signalled implicitly by drop or using explicit congestion
      notification (ECN [RFC3168] or PCN [RFC5670]).  Throughout this
      document, unless clear from the context, the term marking will be
      used to mean notifying congestion explicitly, while congestion
      notification will be used to mean notifying congestion either
      implicitly by drop or explicitly by marking.

   Bit-congestible vs. Packet-congestible:  If the load on a resource
      depends on the rate at which packets arrive, it is called packet-
      congestible.  If the load depends on the rate at which bits arrive
      it is called bit-congestible.

      Examples of packet-congestible resources are route look-up engines
      and firewalls, because load depends on how many packet headers
      they have to process.  Examples of bit-congestible resources are
      transmission links, radio power and most buffer memory, because
      the load depends on how many bits they have to transmit or store.
      Some machine architectures use fixed size packet buffers, so
      buffer memory in these cases is packet-congestible (see
      Section 4.1.1).

      Currently a design goal of network processing equipment such as
      routers and firewalls is to keep packet processing uncongested
      even under worst case packet rates with runs of minimum size
      packets.  Therefore, packet-congestion is currently rare [RFC6077;
      S.3.3], but there is no guarantee that it will not become more
      common in future.

      Note that information is generally processed or transmitted with a
      minimum granularity greater than a bit (e.g. octets).  The
      appropriate granularity for the resource in question should be
      used, but for the sake of brevity we will talk in terms of bytes
      in this memo.

   Coarser Granularity:  Resources may be congestible at higher levels
      of granularity than bits or packets, for instance stateful
      firewalls are flow-congestible and call-servers are session-
      congestible.  This memo focuses on congestion of connectionless
      resources, but the same principles may be applicable for
      congestion notification protocols controlling per-flow and per-
      session processing or state.

   RED Terminology:  In RED, whether to use packets or bytes when
      measuring queues is called respectively "packet-mode queue
      measurement" or "byte-mode queue measurement".  And whether the
      probability of dropping a particular packet is independent of or
      dependent on its byte-size is called respectively "packet-mode
      drop" or "byte-mode drop".  The terms byte-mode and packet-mode
      should not be used without specifying whether they apply to queue
      measurement or to drop.

1.2.  Example Comparing Packet-Mode Drop and Byte-Mode Drop

   A central question addressed by this document is whether to recommend
   RED's packet-mode drop and to deprecate byte-mode drop.  Table 1
   compares how packet-mode and byte-mode drop affect two flows of
   different size packets.  For each it gives the expected number of
   packets and of bits dropped in one second.  Each example flow runs at
   the same bit-rate of 48Mb/s, but one is broken up into small 60 byte
   packets and the other into large 1500 byte packets.

   To keep up the same bit-rate, in one second there are about 25 times
   more small packets because they are 25 times smaller.  As can be seen
   from the table, the packet rate is 100,000 small packets versus 4,000
   large packets per second (pps).

      Parameter            Formula        Small packets Large packets
      -------------------- -------------- ------------- -------------
      Packet size          s/8                      60B        1,500B
      Packet size          s                       480b       12,000b
      Bit-rate             x                     48Mbps        48Mbps
      Packet-rate          u = x/s              100kpps         4kpps

      Packet-mode Drop
      Pkt loss probability p                       0.1%          0.1%
      Pkt loss-rate        p*u                   100pps          4pps
      Bit loss-rate        p*u*s                 48kbps        48kbps

      Byte-mode Drop       MTU, M=12,000b
      Pkt loss probability b = p*s/M             0.004%          0.1%
      Pkt loss-rate        b*u                     4pps          4pps
      Bit loss-rate        b*u*s               1.92kbps        48kbps

         Table 1: Example Comparing Packet-mode and Byte-mode Drop
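   The arithmetic behind Table 1 can be checked with a short script (a
   sketch, not part of the draft; the variable names follow the table's
   "Formula" column):

   ```python
   # Illustrative check of Table 1's arithmetic (not from the draft).
   M = 12_000           # link MTU in bits (1,500B)

   def drop_rates(s, x=48_000_000, p=0.001):
       """Return (packet-mode pkt loss-rate, byte-mode pkt loss-rate,
       byte-mode bit loss-rate) for packet size s bits at bit-rate x."""
       u = x / s                 # packet-rate, u = x/s
       b = p * s / M             # byte-mode deflates p for small packets
       return p * u, b * u, b * u * s

   small = drop_rates(480)       # 60 byte packets
   large = drop_rates(12_000)    # 1,500 byte packets
   ```

   Running it reproduces the table's loss-rate rows: packet-mode sheds
   48kbps of losses from both flows (100pps of small packets versus
   4pps of large), while byte-mode sheds only 1.92kbps from the small-
   packet flow.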

   For packet-mode drop, we illustrate the effect of a drop probability
   of 0.1%, which the algorithm applies to all packets irrespective of
   size.  Because there are 25 times more small packets in one second,
   it naturally drops 25 times more small packets, that is 100 small
   packets but only 4 large packets.  But if we count how many bits it
   drops, there are 48,000 bits in 100 small packets and 48,000 bits in
   4 large packets--the same number of bits of small packets as large.

      The packet-mode drop algorithm drops any bit with the same
      probability whether the bit is in a small or a large packet.

   For byte-mode drop, again we use an example drop probability of 0.1%,
   but only for maximum size packets (assuming the link MTU is 1,500B or
   12,000b).  The byte-mode algorithm reduces the drop probability of
   smaller packets proportional to their size, making the probability
   that it drops a small packet 25 times smaller at 0.004%.  But there
   are 25 times more small packets, so dropping them with 25 times lower
   probability results in dropping the same number of packets: 4 drops
   in both cases.  The 4 small dropped packets contain 25 times less
   bits than the 4 large dropped packets: 1,920 compared to 48,000.

      The byte-mode drop algorithm drops any bit with a probability
      proportionate to the size of the packet it is in.

2.  Recommendations

2.1.  Recommendation on Queue Measurement

   Queue length is usually the most correct and simplest way to measure
   congestion of a resource.  To avoid the pathological effects of drop
   tail, an AQM function can then be used to transform queue length into
   the probability of dropping or marking a packet (e.g.  RED's
   piecewise linear function between thresholds).

   If the resource is bit-congestible, the implementation SHOULD measure
   the length of the queue in bytes.  If the resource is packet-
   congestible, the implementation SHOULD measure the length of the
   queue in packets.  No other choice makes sense, because the number of
   packets waiting in the queue isn't relevant if the resource gets
   congested by bytes and vice versa.

   Corollaries:

   1.  A RED implementation SHOULD use byte mode queue measurement for
       measuring the congestion of bit-congestible resources and packet
       mode queue measurement for packet-congestible resources.

   2.  An implementation SHOULD NOT make it possible to configure the
       way a queue measures itself, because whether a queue is bit-
       congestible or packet-congestible is an inherent property of the
       queue.

   The recommended approach in less straightforward scenarios, such as
   fixed size buffers, and resources without a queue, is discussed in
   Section 4.1.
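   As a concrete sketch of this recommendation (illustrative only: the
   threshold values and names are invented for the example, and RED's
   averaging of the queue length is omitted), a queue serving a bit-
   congestible link would measure itself in bytes and feed that length
   into a RED-style piecewise linear drop-probability ramp:

   ```python
   # Sketch: byte-mode queue measurement feeding a RED-style AQM ramp
   # (illustrative; thresholds are example values, not from any spec).
   MIN_TH = 15_000      # bytes: below this, never drop
   MAX_TH = 45_000      # bytes: at or above this, always drop
   MAX_P  = 0.02        # drop probability as the queue reaches MAX_TH

   def queue_length_bytes(queue):
       """Byte-mode queue measurement: right for a bit-congestible
       resource such as a transmission link."""
       return sum(len(pkt) for pkt in queue)

   def drop_probability(qlen_bytes):
       """RED-style piecewise linear ramp between the two thresholds."""
       if qlen_bytes < MIN_TH:
           return 0.0
       if qlen_bytes >= MAX_TH:
           return 1.0
       return MAX_P * (qlen_bytes - MIN_TH) / (MAX_TH - MIN_TH)
   ```

   For a packet-congestible resource the measurement would instead be
   `len(queue)`; per the corollaries above, that choice follows from the
   nature of the resource, not from configuration.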

2.2.  Recommendation on Encoding Congestion Notification

   When encoding congestion notification (e.g. by drop, ECN & PCN), a
   network device SHOULD treat all packets equally, regardless of their
   size.  In other words, the probability that network equipment drops
   or marks a particular packet to notify congestion SHOULD NOT depend
   on the size of the packet in question.  As the example in Section 1.2
   illustrates, to drop any bit with probability 0.1% it is only
   necessary to drop every packet with probability 0.1% without regard
   to the size of each packet.

   This approach ensures the network layer offers sufficient congestion
   information for all known and future transport protocols and also
   ensures no perverse incentives are created that would encourage
   transports to use inappropriately small packet sizes.

   Corollaries:

   1.  AQM algorithms such as RED SHOULD NOT use byte-mode drop, which
       deflates RED's drop probability for smaller packet sizes.  RED's
       byte-mode drop has no enduring advantages.  It is more complex,
       it creates the perverse incentive to fragment segments into tiny
       pieces and it reopens the vulnerability to floods of small
       packets that drop-tail queues suffered from and AQM was designed
       to remove.

   2.  If a vendor has implemented byte-mode drop, and an operator has
       turned it on, it is RECOMMENDED to turn it off.  Note that RED as
       a whole SHOULD NOT be turned off, as without it, a drop tail
       queue also biases against large packets.  But note also that
       turning off byte-mode drop may alter the relative performance of
       applications using different packet sizes, so it would be
       advisable to establish the implications before turning it off.

       NOTE WELL that RED's byte-mode queue drop is completely
       orthogonal to byte-mode queue measurement and should not be
       confused with it.  If a RED implementation has a byte-mode but
       does not specify what sort of byte-mode, it is most probably
       byte-mode queue measurement, which is fine.  However, if in
       doubt, the vendor should be consulted.

   A survey (Appendix A) showed that there appears to be little, if any,
   installed base of the byte-mode drop variant of RED.  This suggests
   that deprecating byte-mode drop will have little, if any, incremental
   deployment impact.
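   The difference between the recommended and deprecated behaviours can
   be sketched as two decision functions (illustrative only; `MTU` and
   the function names are assumptions for the example, not from any
   implementation):

   ```python
   # Sketch contrasting the two drop decisions (not from any vendor code).
   import random

   MTU = 1500  # bytes; example maximum packet size

   def packet_mode_decision(pkt_bytes, p):
       """Recommended: probability independent of packet size
       (pkt_bytes is deliberately unused)."""
       return random.random() < p

   def byte_mode_decision(pkt_bytes, p):
       """Deprecated: probability deflated for smaller packets."""
       return random.random() < p * (pkt_bytes / MTU)
   ```

   The second function is the one this memo deprecates: it hands small
   packets preferential treatment at the network layer.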

2.3.  Recommendation on Responding to Congestion

   When a transport detects that a packet has been lost or congestion
   marked, it SHOULD consider the strength of the congestion indication
   as proportionate to the size in octets (bytes) of the missing or
   marked packet.

   In other words, when a packet indicates congestion (by being lost or
   marked) it can be considered conceptually as if there is a congestion
   indication on every octet of the packet, not just one indication per
   packet.

   Therefore, the IETF transport area should continue its programme of;

   o  updating host-based congestion control protocols to take account
      of packet size

   o  making transports less sensitive to losing control packets like
      SYNs and pure ACKs.

   Corollaries:

   1.  If two TCP flows with different packet sizes are required to run
       at equal bit rates under the same path conditions, this should be
       done by altering TCP (Section 4.2.2), not network equipment (the
       latter affects other transports besides TCP).

   2.  If it is desired to improve TCP performance by reducing the
       chance that a SYN or a pure ACK will be dropped, this should be
       done by modifying TCP (Section 4.2.3), not network equipment.

2.4.  Recommendation on Handling Congestion Indications when Splitting
      or Merging Packets

   Packets carrying congestion indications may be split or merged in
   some circumstances (e.g. at an RTCP transcoder or during IP fragment
   reassembly).  Splitting and merging only make sense in the context of
   ECN, not loss.

   The general rule to follow is that the number of octets in packets
   with congestion indications SHOULD be equivalent before and after
   merging or splitting.  This is based on the principle used above;
   that an indication of congestion on a packet can be considered as an
   indication of congestion on each octet of the packet.

   The above rule is not phrased with the word "MUST" to allow the
   following exception.  There are cases where pre-existing protocols
   were not designed to conserve congestion marked octets (e.g.  IP
   fragment reassembly [RFC3168] or loss statistics in RTCP receiver
   reports [RFC3550] before ECN was added
   [I-D.ietf-avtcore-ecn-for-rtp]).  When any such protocol is updated,
   it SHOULD comply with the above rule to conserve marked octets.
   However, the rule may be relaxed if it would otherwise become too
   complex to interoperate with pre-existing implementations of the
   protocol.

   One can think of a splitting or merging process as if all the
   incoming congestion-marked octets increment a counter and all the
   outgoing marked octets decrement the same counter.  In order to
   ensure that congestion indications remain timely, even the smallest
   positive remainder in the conceptual counter should trigger the next
   outgoing packet to be marked (causing the counter to go negative).

3.  Motivating Arguments

   In this section, we justify the recommendations given in the
   previous section.

3.1.  Avoiding Perverse Incentives to (Ab)use Smaller Packets

   Increasingly, it is being recognised that a protocol design must take
   care not to cause unintended consequences by giving the parties in
   the protocol exchange perverse incentives [Evol_cc][RFC3426].  Given
   there are many good reasons why larger path max transmission units
   (PMTUs) would help solve a number of scaling issues, we do not want
   to create any bias against large packets that is greater than their
   true cost.

   Imagine a scenario where the same bit rate of packets will contribute
   the same to bit-congestion of a link irrespective of whether it is
   sent as fewer larger packets or more smaller packets.  A protocol
   design that caused larger packets to be more likely to be dropped
   than smaller ones would be dangerous in this case:

   Malicious transports:  A queue that gives an advantage to small
      packets can be used to amplify the force of a flooding attack.  By
      sending a flood of small packets, the attacker can get the queue
      to discard more traffic in large packets, allowing more attack
      traffic to get through to cause further damage.  Such a queue
      allows attack traffic to have a disproportionately large effect on
      regular traffic without the attacker having to do much work.

   Non-malicious transports:  Even if a transport designer is not
      actually malicious, if it is noticed that small packets tend to go
      faster, over time designers will tend to act in their own interest
      and use smaller packets.  Queues that give advantage to small packets
      create an evolutionary pressure for transports to send at the same
      bit-rate but break their data stream down into tiny segments to
      reduce their drop rate.  Encouraging a high volume of tiny packets
      might in turn unnecessarily overload a completely unrelated part
      of the system, perhaps more limited by header-processing than
      bandwidth.

   Imagine two unresponsive flows arrive at a bit-congestible
   transmission link each with the same bit rate, say 1Mbps, but one
   consists of 1500B and the other 60B packets, which are 25x smaller.
   Consider a scenario where gentle RED [gentle_RED] is used, along with
   the variant of RED we advise against, i.e. where the RED algorithm is
   configured to adjust the drop probability of packets in proportion to
   each packet's size (byte mode packet drop).  In this case, RED aims
   to drop 25x more of the larger packets than the smaller ones.  Thus,
   for example if RED drops 25% of the larger packets, it will aim to
   drop 1% of the smaller packets (but in practice it may drop more as
   congestion increases [RFC4828; Appx B.4]).  Even though both flows
   arrive with the same bit rate, the bit rate the RED queue aims to
   pass to the line will be 750kbps for the flow of larger packets but
   990kbps for the smaller packets (because of rate variations it will
   actually be a little less than this target).
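   The arithmetic of this example can be checked with the following
   illustrative sketch (the function names are invented, and the 25%
   figure for the larger packets is the value assumed in the example;
   this describes no real RED implementation):

```python
# Worked check (illustrative only) of the byte-mode drop example above:
# two 1Mbps flows, one of 1500B packets and one of 60B packets, through
# a RED queue using byte-mode packet drop.

MAX_PKT = 1500          # bytes; byte-mode scales drop probability by size
P_MAX_PKT = 0.25        # drop probability applied to 1500B packets (assumed)

def byte_mode_drop_prob(size):
    """Drop probability in proportion to packet size (byte-mode drop)."""
    return P_MAX_PKT * size / MAX_PKT

def target_rate(bit_rate, size):
    """Bit rate the queue aims to pass for a flow of equal-sized packets."""
    return bit_rate * (1 - byte_mode_drop_prob(size))

print(byte_mode_drop_prob(60))              # 0.01: only 1% of 60B packets
print(round(target_rate(1_000_000, 1500)))  # 750000 (750kbps)
print(round(target_rate(1_000_000, 60)))    # 990000 (990kbps)
```

   The 25x size ratio translates directly into a 25x drop-probability
   ratio, which is what hands the smaller-packet flow its advantage.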

   Note that, although the byte-mode drop variant of RED amplifies small
   packet attacks, drop-tail queues amplify small packet attacks even
   more (see Security Considerations in Section 6).  Wherever possible
   neither should be used.

3.2.  Small != Control

   It is tempting to drop small packets with lower probability in order
   to improve performance, because many control packets are small (TCP
   SYNs & ACKs, DNS queries & responses, SIP messages, HTTP GETs, etc)
   and dropping fewer control packets considerably improves performance.
   However, we must not give control packets preference purely by virtue
   of their smallness, otherwise it is too easy for any data source to
   get the same preferential treatment simply by sending data in smaller
   packets.  Again we should not create perverse incentives to favour
   small packets rather than to favour control packets, which is what we
   intend.

   Just because many control packets are small does not mean all small
   packets are control packets.

   So, rather than fix these problems in the network, we argue that the
   transport should be made more robust against losses of control
   packets (see 'Making Transports Robust against Control Packet Losses'
   in Section 4.2.3).

3.3.  Transport-Independent Network

   TCP congestion control ensures that flows competing for the same
   resource each maintain the same number of segments in flight,
   irrespective of segment size.  So under similar conditions, flows
   with different segment sizes will get different bit-rates.

   One motivation for the network biasing congestion notification by
   packet size is to counter this effect and try to equalise the bit-
   rates of flows with different packet sizes.  However, in order to do
   this, the queuing algorithm has to make assumptions about the
   transport, which become embedded in the network.  Specifically:

   o  The queuing algorithm has to assume how aggressively the transport
      will respond to congestion (see Section 4.2.4).  If the network
      assumes the transport responds as aggressively as TCP NewReno, it
      will be wrong for Compound TCP and differently wrong for Cubic
      TCP, etc.  To achieve equal bit-rates, each transport then has to
      guess what assumption the network made, and work out how to
      replace this assumed aggressiveness with its own aggressiveness.

   o  Also, if the network biases congestion notification by packet size
      it has to assume a baseline packet size--all proposed algorithms
      use the local MTU.  Then transports have to guess which link was
      congested and what its local MTU was, in order to know how to
      tailor their congestion response to the link.

   Even though reducing the drop probability of small packets (e.g.
   RED's byte-mode drop) helps ensure TCP flows with different packet
   sizes will achieve similar bit rates, we argue this correction should
   be made to any future transport protocols based on TCP, not to the
   network in order to fix one transport, no matter how predominant it
   is.  Effectively, favouring small packets is reverse engineering of
   network equipment around one particular transport (TCP), contrary to
   the excellent advice in [RFC3426], which asks designers to question
   "Why are you proposing a solution at this layer of the protocol
   stack, rather than at another layer?"

   In contrast, if the network never takes account of packet size, the
   transport can be certain it will never need to guess any assumptions
   the network has made.  And the network passes two pieces of
   information to the transport that are sufficient in all cases: i)
   congestion notification on the packet and ii) the size of the packet.
   Both are available for the transport to combine (by taking account of
   packet size when responding to congestion) or not.  Appendix B checks
   that these two pieces of information are sufficient for all relevant
   scenarios.

   When the network does not take account of packet size, it allows
   transport protocols to choose whether to take account of packet size
   or not.  However, if the network were to bias congestion notification
   by packet size, transport protocols would have no choice; those that
   did not take account of packet size themselves would unwittingly
   become dependent on packet size, and those that already took account
   of packet size would end up taking account of it twice.

3.4.  Scaling Congestion Control with Packet Size

   Having so far justified only our recommendations for the network,
   this section focuses on the host.  We construct a scaling argument to
   justify the recommendation that a host should respond to a dropped or
   marked packet in proportion to its size, not just as a single
   congestion event.

   The argument assumes that we have already sufficiently justified our
   recommendation that the network should not take account of packet
   size.

   Also, we assume bit-congestible links are the predominant source of
   congestion.  As the Internet stands, it is hard if not impossible to
   know whether congestion notification is from a bit-congestible or
   packet-congestible resource (see Appendix B.2) so we have to assume
   the most prevalent case (see Section 1.1).  If this assumption is
   wrong, and particular congestion indications are actually due to
   overload of packet-processing, there is no issue of safety at stake.
   Any congestion control that triggers a multiplicative decrease in
   response to a congestion indication will bring packet processing back
   to its operating point just as quickly.  The only issue at stake is
   that the resource could be utilised more efficiently if packet-
   congestion could be separately identified.

   Imagine a bit-congestible link shared by many flows, so that each
   busy period tends to cause packets to be lost from different flows.
   Consider further two sources that have the same data rate but break
   the load into large packets in one application (A) and small packets
   in the other (B).  Of course, because the load is the same, there
   will be proportionately more packets in the small packet flow (B).

   If a congestion control scales with packet size it should respond in
   the same way to the same congestion notification, irrespective of the
   size of the packets that the bytes causing congestion happen to be
   broken down into.

   A bit-congestible queue suffering congestion has to drop or mark the
   same excess bytes whether they are in a few large packets (A) or many
   small packets (B).  So for the same amount of congestion overload,
   the same amount of bytes has to be shed to get the load back to its
   operating point.  But, of course, for smaller packets (B) more
   packets will have to be discarded to shed the same bytes.

   If both the transports interpret each drop/mark as a single loss
   event irrespective of the size of the packet dropped, the flow of
   smaller packets (B) will respond more times to the same congestion.
   On the other hand, if a transport responds proportionately less when
   smaller packets are dropped/marked, overall it will be able to
   respond the same to the same amount of congestion.

   Therefore, for a congestion control to scale with packet size it
   should respond to dropped or marked bytes (as TFRC-SP [RFC4828]
   effectively does), instead of dropped or marked packets (as TCP
   does).

   For the avoidance of doubt, this is not a recommendation that TCP
   should be changed so that it scales with packet size.  It is a
   recommendation that any future transport protocol proposal should
   respond to dropped or marked bytes if it wishes to claim that it is
   scalable.
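   The scaling argument can be illustrated with a small numeric sketch
   (the excess of 15000 bytes is an invented figure for illustration;
   this is not a model of any real queue):

```python
# Illustrative sketch: two flows send the same volume, one in 1500B
# packets (A), one in 60B packets (B).  A bit-congestible queue sheds
# the same excess bytes from each, so a transport counting marked BYTES
# sees the same congestion, while one counting marked PACKETS sees 25x
# more indications for flow B.

EXCESS_BYTES = 15000        # invented: bytes the queue must shed per flow

def indications(pkt_size):
    """Drops/marks needed to shed EXCESS_BYTES from a flow of
    equal-sized packets."""
    return EXCESS_BYTES // pkt_size

marks_a, marks_b = indications(1500), indications(60)
print(marks_a, marks_b)              # 10 250: per-packet view differs 25x
print(marks_a * 1500, marks_b * 60)  # 15000 15000: per-byte view is equal
```

   Counting bytes (the TFRC-SP approach) makes the measured congestion
   level independent of how the bytes happen to be packetised.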

3.5.  Implementation Efficiency

   Allowing for packet size at the transport rather than in the network
   ensures that neither the network nor the transport needs to do a
   multiply operation--multiplication by packet size is effectively
   achieved as a repeated add when the transport adds to its count of
   marked bytes as each congestion event is fed to it.  This isn't a
   principled reason in itself, but it is a happy consequence of the
   other principled reasons.

4.  A Survey and Critique of that size Past Advice

   This section is informative, not normative.

   The original 1993 paper on RED [RED93] proposed two options for the
   RED active queue management algorithm: packet mode and byte mode.
   Packet mode measured the queue length in packets and dropped (or
   marked) individual packets with a probability independent of their
   size.  Byte mode measured the queue length in bytes and marked an
   individual packet with probability in proportion to its size
   (relative to the maximum packet size).  In the paper's outline of
   further work, it was stated that no recommendation had been made on
   whether the queue size should be measured in bytes or packets, but
   noted that the difference could be significant.

   When RED was recommended for general deployment in 1998 [RFC2309],
   the two modes were mentioned implying the choice between them was a
   question of performance, referring to a 1997 email [pktByteEmail] for
   advice on tuning.  A later addendum to this email introduced the
   insight that there are in fact two orthogonal choices:

   o  whether to measure queue length in bytes or packets (Section 4.1)

   o  whether the drop probability of an individual packet should depend
      on its own size (Section 4.2).

   The rest of this section is structured accordingly.

4.1.  Congestion Measurement Advice

   The choice of which metric to use to measure queue length was left
   open in RFC2309.  It is now well understood that queues for bit-
   congestible resources should be measured in bytes, and queues for
   packet-congestible resources should be measured in packets
   [pktByteEmail].
   Some modern queue implementations give a choice for setting RED's
   thresholds in byte-mode or packet-mode.  This may merely be an
   administrator-interface preference, not altering how the queue itself
   is measured but on some hardware it does actually change the way it
   measures its queue.  Whether a resource is bit-congestible or packet-
   congestible is a property of the resource, so an admin should not
   ever need to, or be able to, configure the way a queue measures
   itself.

   NOTE: Congestion in some legacy bit-congestible buffers is only
   measured in packets not bytes.  In such cases, the operator has to
   set the thresholds mindful of a typical mix of packet sizes.  Any
   AQM algorithm on such a buffer will be oversensitive to high
   proportions of small packets, e.g. a DoS attack, and undersensitive
   to high proportions of large packets.  However, there is no need to
   make allowances for the possibility of such legacy in future protocol
   design.  This is safe because any undersensitivity during unusual
   traffic mixes cannot lead to congestion collapse given the buffer
   will eventually revert to tail drop, discarding proportionately more
   large packets.

4.1.1.  Fixed Size Packet Buffers

   The previously mentioned email [pktByteEmail] referred question of whether to by
   [RFC2309] advised that most scarce resources measure queues in the Internet were
   bit-congestible, which is still believed bytes or packets seems
   to be true (Section 1.1).
   But it went on to give advice we now disagree with.  It said that
   drop probability should depend on well understood.  However, measuring congestion is not
   straightforward when the size of resource is bit congestible but the queue is
   packet being
   considered for drop if congestible or vice versa.  This section outlines the resource approach
   to take.  There is bit-congestible, but not if no controversy over what should be done, you just
   need to be expert in probability to work it
   is packet-congestible.  The argument continued that out.  And, even if you
   know what should be done, it's not always easy to find a practical
   algorithm to implement it.

   Some, mostly older, queuing hardware sets aside fixed sized buffers
   in which to store each packet in the queue.  Also, with some
   hardware, any fixed sized buffers not completely filled by a packet
   are padded when transmitted to the wire.  If we imagine a theoretical
   forwarding system with both queuing and transmission in fixed, MTU-
   sized units, it should clearly be treated as packet-congestible,
   because the queue length in packets would be a good model of
   congestion of the lower layer link.

   If we now imagine a hybrid forwarding system with transmission delay
   largely dependent on the byte-size of packets but buffers of one MTU
   per packet, it should strictly require a more complex algorithm to
   determine the probability of congestion.  It should be treated as two
   resources in sequence, where the sum of the byte-sizes of the packets
   within each buffer models congestion of the line while the length of
   the queue measured in packets models congestion of the queue.  Then
   the probability of congesting the forwarding buffer would be a
   conditional probability--conditional on the previously calculated
   probability of congesting the line.
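
   The two-resource model above can be sketched in code.  This is an
   illustrative sketch only, not part of any AQM specification: the
   thresholds and the RED-style linear ramp are invented for the
   example.  The line's drop probability is driven by queued bytes, the
   buffer's by queued packets, and the buffer's contribution is
   conditional on the packet having survived the line.

```python
# Illustrative sketch (not from this memo): a hybrid forwarding system
# treated as two congestible resources in sequence.  The line is
# bit-congestible (probability driven by queued bytes); the buffer pool
# is packet-congestible (probability driven by queued packets).  All
# thresholds below are hypothetical.

def ramp(level, min_th, max_th, max_p):
    """RED-style linear ramp: 0 below min_th, max_p approaching max_th,
    certain drop at or above max_th."""
    if level <= min_th:
        return 0.0
    if level >= max_th:
        return 1.0
    return max_p * (level - min_th) / (max_th - min_th)

def drop_probability(queued_bytes, queued_packets):
    p_line = ramp(queued_bytes, 15000, 45000, 0.1)    # bytes queued
    p_buffer = ramp(queued_packets, 10, 30, 0.1)      # packets queued
    # The buffer's contribution is conditional on the previously
    # calculated probability of congesting the line.
    return p_line + (1.0 - p_line) * p_buffer
```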

   In systems that use fixed size buffers, it is unusual for all the
   buffers used by an interface to be the same size.  Typically pools of
   different sized buffers are provided (Cisco uses the term 'buffer
   carving' for the process of dividing up memory into these pools
   [IOSArch]).  Usually, if the pool of small buffers is exhausted,
   arriving small packets can borrow space in the pool of large buffers,
   but not vice versa.  However, it is easier to work out what should be
   done if we temporarily set aside the possibility of such borrowing.
   Then, with fixed pools of buffers for different sized packets and no
   borrowing, the size of each pool and the current queue length in each
   pool would both be measured in packets.  So an AQM algorithm would
   have to maintain the queue length for each pool, and judge whether to
   drop/mark a packet of a particular size by looking at the pool for
   packets of that size and using the length (in packets) of its queue.
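
   Such a per-pool AQM might be sketched as follows.  This is an
   illustrative sketch under assumed pool sizes and a placeholder drop
   rule, not an implementation of any vendor's scheme:

```python
# Illustrative sketch (not from this memo): AQM over fixed buffer pools
# with no borrowing.  Each pool keeps its own queue length in packets,
# and the drop/mark decision for a packet uses only the pool matching
# its size.  Pool sizes, capacities and the drop rule are hypothetical.

POOLS = [128, 512, 1600]                      # buffer sizes in bytes
POOL_CAPACITY = {128: 64, 512: 32, 1600: 16}  # buffers per pool
queue_len = {size: 0 for size in POOLS}       # queue length, in packets

def pool_for(packet_bytes):
    """Smallest pool whose buffers can hold the packet."""
    for size in POOLS:
        if packet_bytes <= size:
            return size
    raise ValueError("packet larger than largest buffer")

def should_drop(packet_bytes):
    pool = pool_for(packet_bytes)
    # The queue is measured in packets, never bytes (Section 4.1.1).
    fill = queue_len[pool] / POOL_CAPACITY[pool]
    return fill > 0.5        # placeholder for a real AQM ramp
```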

   We now return to the issue we temporarily set aside: small packets
   borrowing space in larger buffers.  In this case, the difference is
   that the pools for smaller packets have a maximum queue size that
   includes all the pools for larger packets.  And every time a packet
   takes a larger buffer, the current queue size has to be incremented
   for all queues in the pools of buffers less than or equal to the
   buffer size used.

   We will return to borrowing of fixed sized buffers when we discuss
   biasing the drop/marking probability of a specific packet because of
   its size in Section 4.2.1.  But here we can give at least one simple
   rule for how to measure the length of queues of fixed buffers: no
   matter how complicated the scheme is, ultimately any fixed buffer
   system will need to measure its queue length in packets not bytes.
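
   The borrowing accounting described above can be shown in a few lines.
   This is an illustrative sketch with hypothetical pool sizes, not a
   description of any particular hardware:

```python
# Illustrative sketch (not from this memo): queue accounting when small
# packets may borrow buffers from larger pools.  Whenever a packet
# takes a buffer, the queue length (in packets) is incremented for
# every pool of that buffer size or smaller, because each such pool has
# lost a buffer it could otherwise have used.  Pool sizes hypothetical.

POOLS = [128, 512, 1600]                  # buffer sizes, smallest first
queue_len = {size: 0 for size in POOLS}   # per-pool queue length, packets

def enqueue(buffer_size_used):
    """Record that a packet has taken a buffer of the given size."""
    for size in POOLS:
        if size <= buffer_size_used:
            queue_len[size] += 1

# A 1500B packet takes a 1600B buffer: all three queue lengths grow,
# since smaller packets could also have borrowed that buffer.
enqueue(1600)
# A 100B packet takes a 128B buffer: only the 128B queue grows.
enqueue(128)
```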

4.1.2.  Congestion Measurement without a Queue

   AQM algorithms are nearly always described assuming there is a queue
   for a congested resource and the algorithm can use the queue length
   to determine the probability that it will drop or mark each packet.
   But not all congested resources lead to queues.  For instance,
   wireless spectrum is usually regarded as bit-congestible (for a given
   coding scheme).  But wireless link protocols do not always maintain a
   queue that depends on spectrum interference.  Similarly, power
   limited resources are also usually bit-congestible if energy is
   primarily required for transmission rather than header processing,
   but it is rare for a link protocol to build a queue as it approaches
   maximum power.

   Nonetheless, AQM algorithms do not require a queue in order to work.
   For instance spectrum congestion can be modelled by signal quality
   using target bit-energy-to-noise-density ratio.  And, to model radio
   power exhaustion, transmission power levels can be measured and
   compared to the maximum power available.  [ECNFixedWireless] proposes
   a practical and theoretically sound way to combine congestion
   notification for different bit-congestible resources at different
   layers along an end to end path, whether wireless or wired, and
   whether with or without queues.
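
   As an illustration of AQM without a queue, radio power exhaustion
   could drive a marking probability directly from power utilisation.
   This is a hypothetical sketch, not the mechanism of
   [ECNFixedWireless]; all thresholds are invented for the example:

```python
# Illustrative sketch (not from [ECNFixedWireless]): congestion marking
# without a queue.  Power utilisation (current transmission power over
# maximum available power) drives a RED-like marking ramp.  The
# thresholds and maximum marking probability are hypothetical.

def marking_probability(tx_power_mw, max_power_mw,
                        min_util=0.6, max_util=0.95, max_p=0.2):
    """ECN-marking probability from power utilisation; no queue needed."""
    util = tx_power_mw / max_power_mw
    if util <= min_util:
        return 0.0
    if util >= max_util:
        return 1.0
    return max_p * (util - min_util) / (max_util - min_util)
```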

4.2.  Congestion Notification Advice

4.2.1.  Network Bias when Encoding

4.2.1.1.  Advice on Packet Size Bias in RED

   The previously mentioned email [pktByteEmail] referred to by
   [RFC2309] advised that most scarce resources in the Internet were
   bit-congestible, which is still believed to be true (Section 1.1).
   But it went on to offer advice that is updated by this memo.  It said
   that drop probability should depend on the size of the packet being
   considered for drop if the resource is bit-congestible, but not if it
   is packet-congestible.  The argument continued that if packet drops
   were inflated by packet size (byte-mode dropping), "a flow's fraction
   of the packet drops is then a good indication of that flow's fraction
   of the link bandwidth in bits per second".  This was consistent with
   a referenced policing mechanism being worked on at the time for
   detecting unusually high bandwidth flows, eventually published in
   1999 [pBox].  However, the problem could and should have been solved
   by making the policing mechanism count the volume of bytes randomly
   dropped, not the number of packets.

   A few months before RFC2309 was published, an addendum was added to
   the above archived email referenced from the RFC, in which the final
   paragraph seemed to partially retract what had previously been said.
   It clarified that the question of whether the probability of
   dropping/marking a packet should depend on its size was not related
   to whether the resource itself was bit congestible, but a completely
   orthogonal question.  However the only example given had the queue
   measured in packets but packet drop depended on the byte-size of the
   packet in question.  No example was given the other way round.

   In 2000, Cnodder et al [REDbyte] pointed out that there was an error
   in the part of the original 1993 RED algorithm that aimed to
   distribute drops uniformly, because it didn't correctly take into
   account the adjustment for packet size.  They recommended an
   algorithm called RED_4 to fix this.  But they also recommended a
   further change, RED_5, to adjust drop rate dependent on the square of
   relative packet size.  This was indeed consistent with one implied
   motivation behind RED's byte mode drop--that we should reverse
   engineer the network to improve the performance of dominant end-to-
   end congestion control mechanisms.  This memo makes different
   recommendations in Section 2.
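
   The difference between the two adjustments can be stated in two
   lines.  This is an illustrative sketch of the scalings described in
   [REDbyte], not a transcription of the RED_4 or RED_5 pseudocode;
   `p` stands for the drop probability RED computes for a maximum sized
   packet:

```python
# Illustrative sketch (not the [REDbyte] pseudocode): the scalings
# behind Cnodder's RED_4 (drop probability linear in relative packet
# size) and RED_5 (square of relative packet size).  s is the packet
# size and s_max the maximum packet size.

def red4_drop_prob(p, s, s_max):
    return p * s / s_max              # linear byte-mode drop

def red5_drop_prob(p, s, s_max):
    return p * (s / s_max) ** 2       # square byte-mode drop

# A 500B packet on a 1500B-MTU link is dropped at 1/3 the maximum-size
# rate under RED_4 and 1/9 of it under RED_5.
```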

   By 2003, a further change had been made to the adjustment for packet
   size, this time in the RED algorithm of the ns2 simulator.  Instead
   of taking each packet's size relative to a `maximum packet size' it
   was taken relative to a `mean packet size', intended to be a static
   value representative of the `typical' packet size on the link.  We
   have not been able to find a justification in the literature for this
   change, however Eddy and Allman conducted experiments [REDbias] that
   assessed how sensitive RED was to this parameter, amongst other
   things.  However, this changed algorithm can often lead to drop
   probabilities of greater than 1 (which gives a hint that there is
   probably a mistake in the theory somewhere).
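
   A quick worked example shows how the ns2 scaling overflows.  The
   function below is an illustrative rendering of the scaling just
   described, with invented numbers, not the ns2 source code:

```python
# Illustrative arithmetic (not the ns2 source): the ns2 variant scales
# the packet-mode drop probability by packet size relative to a static
# `mean packet size'.  For packets larger than the assumed mean, the
# scaled value can exceed 1, hinting at a flaw in the theory.

def byte_mode_drop_prob(p_packet_mode, pkt_size, mean_pkt_size):
    """ns2-style byte-mode scaling of the drop probability."""
    return p_packet_mode * pkt_size / mean_pkt_size

# With a 500B assumed mean, a 1500B packet at a 40% packet-mode drop
# probability yields a nominal probability of 1.2 -- greater than 1.
p = byte_mode_drop_prob(0.4, 1500, 500)
```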

   On 10-Nov-2004, this variant of byte-mode packet drop was made the
   default in the ns2 simulator.  It seems unlikely that byte-mode drop
   has ever been implemented in production networks (Appendix A), so ns2
   simulations that use RED without disabling byte-mode drop are likely
   to behave very differently from RED in production networks.

4.2.1.2.  Packet Size Bias Regardless of long serial chains RED

   The byte-mode drop variant of RED is, of course, not the only
   possible bias towards small packets in queueing systems.  We have
   already mentioned that tail-drop queues naturally tend to lock-out
   large packets once they are full.  But also queues with fixed sized
   buffers reduce the probability that small packets will be dropped if
   (and only if) they allow small packets to borrow buffers from the
   pools for larger packets.  As was explained in Section 4.1.1 on fixed
   size buffer carving, borrowing effectively makes the maximum queue
   size for small packets greater than that for large packets, because
   more buffers can be used by small packets while fewer will fit large
   packets.

   In itself, the bias towards small packets caused by buffer borrowing
   is perfectly correct.  Lower drop probability for small packets is
   legitimate in buffer borrowing schemes, because small packets
   genuinely congest the machine's buffer memory less than large
   packets, given they can fit in more spaces.  The bias towards small
   packets is not artificially added (as it is in RED's byte-mode drop
   algorithm), it merely reflects the reality of the way fixed buffer
   memory gets congested.  Incidentally, the bias towards small packets
   from buffer borrowing is nothing like as large as that of RED's byte-
   mode drop.

   Nonetheless, fixed-buffer memory with tail drop is still prone to
   lock-out large packets, purely because of the tail-drop aspect.  So a
   good AQM algorithm like RED with packet-mode drop should be used with
   fixed buffer memories where possible.  If RED is too complicated to
   implement with multiple fixed buffer pools, the minimum necessary to
   prevent large packet lock-out is to ensure smaller packets never use
   the last available buffer in any of the pools for larger packets.

4.2.2.  Transport Bias when Decoding

   The above proposals to alter the network equipment to bias towards
   smaller packets have largely carried on outside the IETF process.
   Whereas, within the IETF, there are many different proposals to alter
   transport protocols to achieve the same goals, i.e. either to make
   the flow bit-rate take account of packet size, or to protect control
   packets from loss.  This memo argues that altering transport
   protocols is the more principled approach.

   A recently approved experimental RFC adapts its products.  Also an individual
   approach transport layer
   protocol to take account of packet sizes relative to Alcatel-Lucent drew typical TCP
   packet sizes.  This proposes a confirmation that it was very
   likely that none new small-packet variant of their products contained RED code TCP-
   friendly rate control [RFC5348] called TFRC-SP [RFC4828].
   Essentially, it proposes a rate equation that
   implemented any packet-size bias.

   Turning to our more formal survey (Table 2), about 19% inflates the flow rate
   by the ratio of those
   surveyed have replied so far, giving a sample typical TCP segment size of 16.  Although
   we do not have permission to identify (1500B including TCP
   header) over the respondents, we can say
   that those that have responded include most actual segment size [PktSizeEquCC].  (There are also
   other important differences of the larger vendors,
   covering detail relative to TFRC, such as using
   virtual packets [CCvarPktSize] to avoid responding to multiple losses
   per round trip and using a large fraction minimum inter-packet interval.)
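
   The essence of the TFRC-SP rate inflation can be sketched as follows.
   Here `tfrc_rate_bps` stands in for the output of the full [RFC5348]
   throughput equation; the function is an illustrative sketch, not the
   TFRC-SP specification:

```python
# Illustrative sketch (not the TFRC-SP specification): TFRC-SP inflates
# the rate computed by the standard TFRC throughput equation by the
# ratio of a typical TCP segment size (1500B including TCP header) to
# the actual segment size.  tfrc_rate_bps is a placeholder for the
# result of the RFC 5348 equation.

TYPICAL_TCP_SEGMENT = 1500          # bytes, including TCP header

def tfrc_sp_rate_bps(tfrc_rate_bps, actual_segment_bytes):
    """Inflate the TFRC rate so a small-packet flow gets the bit-rate a
    1500B-segment flow would get at the same loss rate."""
    return tfrc_rate_bps * TYPICAL_TCP_SEGMENT / actual_segment_bytes

# A flow sending 500B segments is allowed 3x the rate the plain TFRC
# equation would give for that segment size.
```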

   Section 4.5.1 of this TFRC-SP spec discusses the implications of
   operating in an environment where queues have been configured to drop
   smaller packets with a proportionately lower probability than larger
   ones.  But it only discusses TCP operating in such an environment,
   only mentioning TFRC-SP briefly when discussing how to define
   fairness with TCP.  And it only discusses the byte-mode dropping
   version of RED as it was before Cnodder et al pointed out it didn't
   sufficiently bias towards small packets to make TCP independent of
   packet size.

   So the TFRC-SP spec doesn't address the issue of which of the network
   or the transport _should_ handle fairness between different packet
   sizes.  In its Appendix B.4 it discusses the possibility of both
   TFRC-SP and some network buffers duplicating each other's attempts to
   deliberately bias towards small packets.  But the discussion is not
   conclusive, instead reporting simulations of many of the
   possibilities in order to assess performance but not recommending any
   particular course of action.

   The paper originally proposing TFRC with virtual packets (VP-TFRC)
   [CCvarPktSize] proposed that there should perhaps be two variants to
   cater for the different variants of RED.  However, as the TFRC-SP
   authors point out, there is no way for a transport to know whether
   some queues on its path have deployed RED with byte-mode packet drop
   (except if an exhaustive survey found that no-one has deployed it!--
   see Appendix A).  Incidentally, VP-TFRC also proposed that byte-mode
   RED dropping should really square the packet-size compensation-factor
   (like that of Cnodder's RED_5, but apparently unaware of it).

   Pre-congestion notification [RFC5670] is an IETF technology to use a
   virtual queue for AQM marking for packets within one Diffserv class
   in order to give early warning prior to any real queuing.  The PCN
   marking algorithms have been designed not to take account of packet
   size when forwarding through queues.  Instead the general principle
   has been to take account of the sizes of marked packets when
   monitoring the fraction of marking at the edge of the network, as
   recommended here.

4.2.3.  Making Transports Robust against Control Packet Losses

   Recently, two RFCs have defined changes to TCP that make it more
   robust against losing small control packets [RFC5562] [RFC5690].  In
   both cases they note that the case for these two TCP changes would be
   weaker if RED were biased against dropping small packets.  We argue
   here that these two proposals are a safer and more principled way to
   achieve TCP performance improvements than reverse engineering RED to
   benefit TCP.

   Although there are no known proposals, it would also be possible and
   perfectly valid to make control packets robust against drop by
   explicitly requesting a lower drop probability using their Diffserv
   code point [RFC2474] to request a scheduling class with lower drop.

   Although not brought to the IETF, a simple proposal from Wischik
   [DupTCP] suggests that the first three packets of every TCP flow
   should be routinely duplicated after a short delay.  It shows that
   this would greatly improve the chances of short flows completing
   quickly, but it would hardly increase traffic levels on the Internet,
   because Internet bytes have always been concentrated in the large
   flows.  It further shows that the performance of many typical
   applications depends on completion of long serial chains of short
   messages.  It argues that, given most of the value people get from
   the Internet is concentrated within short flows, this simple
   expedient would greatly increase the value of the best efforts
   Internet at minimal cost.
   alone deployed, minimal cost.

4.2.4.  Congestion Notification: Summary of Conflicting Advice

   +-----------+----------------+-----------------+--------------------+
   | transport |  RED_1 (packet |  RED_4 (linear  | RED_5 (square byte |
   |        cc |   mode drop)   | byte mode drop) |     mode drop)     |
   +-----------+----------------+-----------------+--------------------+
   |    TCP or |    s/sqrt(p)   |    sqrt(s/p)    |      1/sqrt(p)     |
   |      TFRC |                |                 |                    |
   |   TFRC-SP |    1/sqrt(p)   |    1/sqrt(sp)   |    1/(s.sqrt(p))   |
   +-----------+----------------+-----------------+--------------------+

    Table 2: Dependence of flow bit-rate per RTT on packet size, s, and
   drop probability, p, when network and/or transport bias towards small
                        packets to varying degrees

   Table 2 aims to take account summarise the potential effects of packet size all the advice
   from different sources.  Each column shows a different possible AQM
   behaviour in
   transport congestion control protocols has started with TFRC-SP
   [RFC4828], while weighted TCPs implemented different queues in the research community
   [WindowPropFair] could form network, using the basis terminology
   of Cnodder et al outlined earlier (RED_1 is basic RED with packet-
   mode drop).  Each row shows a future change to different transport behaviour: TCP
   congestion control
   [RFC5681] itself.

5.2.  Bit- & Packet-congestible World

   Nonetheless, and TFRC [RFC5348] on the position is much less clear-cut if top row with TFRC-SP [RFC4828]
   below.  Each cell shows how the Internet
   becomes populated by a more even mix bits per round trip of both packet-congestible a flow depends
   on packet size, s, and
   bit-congestible resources.  If we believe we should allow drop probability, p.  In order to declutter
   the formulae to focus on packet-size dependence they are all given
   per round trip, which removes any RTT term.
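
   The top-row cells of Table 2 can be checked numerically.  This is an
   illustrative evaluation of the table's formulae (in the arbitrary
   units of the table, with any RTT term removed), not a model of real
   TCP dynamics:

```python
# Illustrative check of the top row of Table 2: bits per round trip of
# TCP or TFRC as a function of packet size s and drop probability p,
# under each AQM variant.  Units are the table's arbitrary units.

import math

def tcp_bits_per_rtt(s, p, aqm):
    if aqm == "RED_1":
        return s / math.sqrt(p)          # depends on packet size
    if aqm == "RED_4":
        return math.sqrt(s / p)          # still depends on packet size
    if aqm == "RED_5":
        return 1 / math.sqrt(p)          # independent of packet size
    raise ValueError(aqm)

# TCP in a RED_5 network: the same bit-rate whatever the packet size,
# matching the `do nothing at the transport' scenario discussed below.
r_small = tcp_bits_per_rtt(100, 0.01, "RED_5")
r_large = tcp_bits_per_rtt(1500, 0.01, "RED_5")
```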

   Let us assume that the goal is for the bit-rate of a flow to be
   independent of packet size.  Suppressing all inessential details, the
   table shows that this should either be achievable by not altering the
   TCP transport in a RED_5 network, or using the small packet TFRC-SP
   transport (or similar) in a network without any byte-mode dropping
   RED (top right and bottom left).  Top left is the `do nothing'
   scenario, while bottom right is the `do-both' scenario in which bit-
   rate would become far too biased towards small packets.  Of course,
   if any form of byte-mode dropping RED has been deployed on a subset
   of queues that congest, each path through the network will present a
   different hybrid scenario to its transport.

   Whatever, we can see that the linear byte-mode drop column in the
   middle would considerably complicate the Internet.  It's a half-way
   house that doesn't bias enough towards small packets even if one
   believes the network should be doing the biasing.  Section 2
   recommends that _all_ bias in network equipment towards small
   packets should be turned off--if indeed any equipment vendors have
   implemented it--leaving packet-size bias solely as the preserve of
   the transport layer (solely the leftmost, packet-mode drop column).
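   The contrast between the deprecated byte-mode drop and the
   recommended packet-mode drop can be sketched as follows.  This is an
   illustrative sketch only: RED's queue averaging and thresholds are
   omitted, the choice of the MTU as the reference size is an
   assumption, and the function name is the author's own.

```python
MTU = 1500  # bytes; assumed reference size for byte-mode scaling

def drop_probability(base_p, pkt_bytes, byte_mode):
    """Probability that an AQM drops (or marks) a packet.

    Packet-mode drop (recommended): base_p, regardless of packet size.
    Linear byte-mode drop (deprecated): base_p scaled by pkt_bytes/MTU,
    so small packets are dropped proportionately less often.
    """
    return base_p * pkt_bytes / MTU if byte_mode else base_p

# At base_p = 1%, byte-mode drop gives a 60B packet only a
# 1% * 60/1500 = 0.04% drop probability, against 1% for a 1500B
# packet; packet-mode drop gives 1% to both.
```

   The point of the sketch is that, under byte-mode drop, halving the
   packet size halves the drop probability, which is exactly the bias
   this memo recommends turning off.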

   In practice it seems that no deliberate bias towards small packets
   has been implemented for production networks.  Of the 19% of
   vendors who responded to a survey of 84 equipment vendors, none had
   implemented byte-mode drop in RED (see Appendix A for details).

5.  Outstanding Issues and whether Next Steps

5.1.  Bit-congestible Network

   For a connectionless network with nearly all resources being bit-
   congestible the recommended position is clear--that the network
   should not make allowance for packet sizes and the transport
   should.  This leaves two outstanding issues:

   o  How to handle any legacy of AQM with byte-mode drop already
      deployed;

   o  The need to start a programme to update transport congestion
      control protocol standards to take account of packet size.

   A survey of equipment vendors (Section 4.2.4) found no evidence
   that byte-mode packet drop had been implemented, so deployment will
   be sparse at best.  A migration strategy is not really needed to
   remove an algorithm that may not even be deployed.

   A programme of experimental updates to take account of packet size in
   transport congestion control protocols has already started with
   TFRC-SP [RFC4828].

5.2.  Bit- & Packet-congestible Network

   The position is much less clear-cut if the Internet becomes
   populated by a more even mix of both packet-congestible and bit-
   congestible resources (see Appendix B.2).  This problem is not
   pressing, because most Internet resources are designed to be bit-
   congestible before packet processing starts to congest (see
   Section 1.1).

   The IRTF Internet congestion control research group (ICCRG) has set
   itself the task of reaching consensus on generic forwarding
   mechanisms that are necessary and sufficient to support the
   Internet's future congestion control requirements (the first
   challenge in [RFC6077]).  Therefore, rather than not giving
   this problem any thought at all, just because it is hard and
   currently hypothetical, we defer the question of whether
   packet congestion might become common and what to do if it does to
   the IRTF (the 'Small Packets' challenge in [RFC6077]).

6.  Security Considerations

   This memo recommends that queues do not bias drop probability towards
   small packets as this creates a perverse incentive for transports to
   break down their flows into tiny segments.  One of the benefits of
   implementing AQM was meant to be to remove this perverse incentive
   that drop-tail queues gave to small packets.  Of course, if
   transports really want to make the greatest gains, they don't have to
   respond to congestion anyway.  But we don't want applications that
   are trying to behave to discover that they can go faster by using
   smaller packets.

   In practice, transports cannot all be trusted to respond to
   congestion.  So another reason for recommending that queues do not
   bias drop probability towards small packets is to avoid the
   vulnerability to small packet DDoS attacks that would otherwise
   result.  One of the benefits of implementing AQM was meant to be to
   remove drop-tail's DoS vulnerability to small packets, so we
   shouldn't add it back again.

   If most queues implemented AQM with byte-mode drop, the resulting
   network would amplify the potency of a small packet DDoS attack.  At
   the first queue the stream of packets would push aside a greater
   proportion of large packets, so more of the small packets would
   survive to attack the next queue.  Thus a flood of small packets
   would continue on towards the destination, pushing regular traffic
   with large packets out of the way in one queue after the next, but
   suffering much less drop itself.
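   The compounding effect described above can be illustrated with a toy
   calculation (the 5% drop probability, the 10-queue path and the
   assumption of independent queues are all arbitrary choices by the
   author, not figures from this memo):

```python
def survival_probability(p_drop, n_queues):
    """Probability that a packet crosses n_queues queues when each
    drops it independently with probability p_drop (toy model)."""
    return (1.0 - p_drop) ** n_queues

# Under linear byte-mode drop that gives 1500B packets a 5% drop
# probability per queue, a 60B packet sees only 5% * 60/1500 = 0.2%.
# Over a path of 10 such queues:
large = survival_probability(0.05, 10)   # roughly 60% survive
small = survival_probability(0.002, 10)  # roughly 98% survive
```

   The flood of small packets is barely thinned while the large
   packets it competes with are heavily dropped, which is the
   amplification the text warns about.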

   Appendix C explains why the ability of networks to police the
   response of _any_ transport to congestion depends on bit-congestible
   network resources only doing packet-mode not byte-mode drop.  In
   summary, it says that making drop probability depend on the size of
   the packets that bits happen to be divided into simply encourages the
   bits to be divided into smaller packets.  Byte-mode drop would
   therefore irreversibly complicate any attempt to fix the Internet's
   incentive structures.

7.  Conclusions

   This memo identifies the three distinct stages of the congestion
   notification process where implementations need to decide whether
   to take packet size into account.  The recommendation of this memo
   is different in each case:

   o  When network equipment measures the length of a queue, whether
      it counts in bytes or packets depends on whether the network
      resource is congested respectively by bytes or by packets.

   o  When network equipment decides whether to drop (or mark) a packet,
      it is recommended that the size of the particular packet should
      not be taken into account.

   o  However, when a transport algorithm responds to a dropped or
      marked packet, the size of the rate reduction should be
      proportionate to the size of the packet.

   In summary, the answers are 'it depends', 'no' and 'yes'
   respectively.
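   The three decisions can be summarised in a schematic sketch.  The
   function names and the normalisation by a reference MTU in the
   third stage are the author's own illustration, not wording from the
   recommendations themselves:

```python
def queue_length(pkt_sizes, bit_congestible=True):
    # 1) 'it depends': measure the queue in bytes if the resource is
    #    bit-congestible, in packets if it is packet-congestible.
    return sum(pkt_sizes) if bit_congestible else len(pkt_sizes)

def drop_decision_probability(p, pkt_bytes):
    # 2) 'no': the drop/mark decision ignores the packet's size.
    return p

def transport_response_weight(pkt_bytes, mtu=1500):
    # 3) 'yes': the transport's rate reduction is proportionate to the
    #    size of the dropped/marked packet (mtu normalisation assumed).
    return pkt_bytes / mtu
```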

   This means that RED's byte-mode queue measurement will often be
   appropriate although byte-mode drop is strongly deprecated.

   At the transport layer the IETF should continue updating congestion
   control protocols to take account of the size of each packet that
   indicates congestion.  Also the IETF should continue to make
   protocols less sensitive to losing control packets like SYNs, pure
   ACKs and DNS exchanges.  Although many control packets happen to be
   small, the alternative of network equipment favouring all small
   packets would be dangerous.  That would create perverse incentives to
   split data transfers into smaller packets.

   The memo develops these recommendations from principled arguments
   concerning scaling, layering, incentives, inherent efficiency,
   security and policeability.  But it also addresses practical issues
   such as specific buffer architectures and incremental deployment.
   Indeed a limited survey of RED implementations is discussed, which
   shows there appears to be little, if any, installed base of RED's
   byte-mode drop.  Therefore it can be deprecated with little, if any,
   incremental deployment complications.

   The recommendations have been developed on the well-founded basis
   that most Internet resources are bit-congestible not packet-
   congestible.  We need to know the likelihood that this assumption
   will prevail longer term and, if it might not, what protocol changes
   will be needed to cater for a mix of the two.  This problem is
   deferred to the IRTF Internet Congestion Control Research Group
   (ICCRG).

8.  Acknowledgements

   Thank you to Sally Floyd, who gave extensive and useful review
   comments.  Also thanks for the reviews from Philip Eardley, David
   Black, Fred Baker, Toby Moncaster, Arnaud Jacquet and Mirja
   Kuehlewind as well as helpful explanations of different hardware
   approaches from Larry Dunn and Fred Baker.  We are grateful to Bruce
   Davie and his colleagues for providing a timely and efficient survey
   of RED implementation in Cisco's product range.  Also grateful thanks
   to Toby Moncaster, Will Dormann, John Regnault, Simon Carter and
   Stefaan De Cnodder who further helped survey the current status of
   RED implementation and deployment and, finally, thanks to the
   anonymous individuals who responded.

   Bob Briscoe and Jukka Manner are partly funded by Trilogy, a research
   project (ICT- 216372) supported by the European Community under its
   Seventh Framework Programme.  The views expressed here are those of
   the authors only.

9.  Comments Solicited

   Comments and questions are encouraged and very welcome.  They can be
   addressed to the IETF Transport Area working group mailing list
   <tsvwg@ietf.org>, and/or to the authors.

10.  References

10.1.  Normative References

   [RFC2119]                       Bradner, S., "Key words for use in
                                   RFCs to Indicate Requirement Levels",
                                   BCP 14, RFC 2119, March 1997.

   [RFC2309]                       Braden, B., Clark, D., Crowcroft, J.,
                                   Davie, B., Deering, S., Estrin, D.,
                                   Floyd, S., Jacobson, V., Minshall,
                                   G., Partridge, C., Peterson, L.,
                                   Ramakrishnan, K., Shenker, S.,
                                   Wroclawski, J., and L. Zhang,
                                   "Recommendations on Queue Management
                                   and Congestion Avoidance in the
                                   Internet", RFC 2309, April 1998.

   [RFC3168]                       Ramakrishnan, K., Floyd, S., and D.
                                   Black, "The Addition of Explicit
                                   Congestion Notification (ECN) to IP",
                                   RFC 3168, September 2001.

   [RFC3426]                       Floyd, S., "General Architectural and
                                   Policy Considerations", RFC 3426,
                                   November 2002.

   [RFC5033]                       Floyd, S. and M. Allman, "Specifying
                                   New Congestion Control Algorithms",
                                   BCP 133, RFC 5033, August 2007.

10.2.  Informative References

   [CCvarPktSize]                  Widmer, J., Boutremans, C., and J-Y.
                                   Le Boudec, "Congestion Control for
                                   Flows with Variable Packet Size", ACM
                                   CCR 34(2) 137--151, 2004, <http://
                                   doi.acm.org/10.1145/997150.997162>.

   [CHOKe_Var_Pkt]                 Psounis, K., Pan, R., and B.
                                   Prabhaker, "Approximate Fair Dropping
                                   for Variable Length Packets", IEEE
                                   Micro 21(1):48--56, January-
                                   February 2001, <http://
                                   www.stanford.edu/~balaji/papers/
                                    01approximatefair.pdf>.

   [DRQ]                           Shin, M., Chong, S., and I. Rhee,
                                   "Dual-Resource TCP/AQM for
                                   Processing-Constrained Networks",
                                   IEEE/ACM Transactions on
                                   Networking Vol 16, issue 2,
                                   April 2008, <http://dx.doi.org/
                                   10.1109/TNET.2007.900415>.

   [DupTCP]                        Wischik, D., "Short messages", Royal
                                   Society workshop on networks:
                                   modelling and control ,
                                   September 2007, <http://
                                   www.cs.ucl.ac.uk/staff/ucacdjw/
                                   Research/shortmsg.html>.

   [ECNFixedWireless]              Siris, V., "Resource Control for
                                   Elastic Traffic in CDMA Networks",
                                   Proc. ACM MOBICOM'02 ,
                                   September 2002, <http://
                                   www.ics.forth.gr/netlab/publications/
                                   resource_control_elastic_cdma.html>.

   [Evol_cc]                       Gibbens, R. and F. Kelly, "Resource
                                   pricing and the evolution of
                                   congestion control",
                                   Automatica 35(12)1969--1985,
                                   December 1999, <http://
                                   www.statslab.cam.ac.uk/~frank/
                                   evol.html>.

   [I-D.ietf-avtcore-ecn-for-rtp]  Westerlund, M., Johansson, I.,
                                    Perkins, C., O'Hanlon, P., and K.
                                    Carlberg, "Explicit Congestion
                                    Notification (ECN) for RTP over UDP",
                                    draft-ietf-avtcore-ecn-for-rtp-04
                                    (work in progress), July 2011.

   [I-D.ietf-conex-concepts-uses]  Briscoe, B., Woundy, R., and A.
                                    Cooper, "ConEx Concepts and Use
                                    Cases",
                                    draft-ietf-conex-concepts-uses-03
                                    (work in progress), October 2011.

   [IOSArch]                       Bollapragada, V., White, R., and C.
                                   Murphy, "Inside Cisco IOS Software
                                   Architecture", Cisco Press: CCIE
                                   Professional Development ISBN13: 978-
                                   1-57870-181-0, July 2000.

   [MulTCP]                        Crowcroft, J. and Ph. Oechslin,
                                   "Differentiated End to End Internet
                                   Services using a Weighted
                                   Proportional Fair Sharing TCP",
                                   CCR 28(3) 53--69, July 1998, <http://
                                   www.cs.ucl.ac.uk/staff/J.Crowcroft/
                                   hipparch/pricing.html>.

   [PktSizeEquCC]                  Vasallo, P., "Variable Packet Size
                                   Equation-Based Congestion Control",
                                   ICSI Technical Report tr-00-008,
                                   2000, <http://http.icsi.berkeley.edu/
                                   ftp/global/pub/techreports/2000/
                                   tr-00-008.pdf>.

   [RED93]                         Floyd, S. and V. Jacobson, "Random
                                   Early Detection (RED) gateways for
                                   Congestion Avoidance", IEEE/ACM
                                   Transactions on Networking 1(4) 397--
                                   413, August 1993, <http://
                                   www.icir.org/floyd/papers/red/
                                   red.html>.

   [REDbias]                       Eddy, W. and M. Allman, "A Comparison
                                   of RED's Byte and Packet Modes",
                                   Computer Networks 42(3) 261--280,
                                   June 2003, <http://www.ir.bbn.com/
                                   documents/articles/redbias.ps>.

   [REDbyte]                       De Cnodder, S., Elloumi, O., and K.
                                   Pauwels, "RED behavior with different
                                   packet sizes", Proc. 5th IEEE
                                   Symposium on Computers and
                                   Communications (ISCC) 793--799,
                                   July 2000, <http://www.icir.org/
                                   floyd/red/Elloumi99.pdf>.

   [RFC2474]                       Nichols, K., Blake, S., Baker, F.,
                                   and D. Black, "Definition of the
                                   Differentiated Services Field (DS
                                   Field) in the IPv4 and IPv6 Headers",
                                   RFC 2474, December 1998.

   [RFC3550]                       Schulzrinne, H., Casner, S.,
                                    Frederick, R., and V. Jacobson,
                                    "RTP: A Transport Protocol for
                                    Real-Time Applications", STD 64,
                                    RFC 3550, July 2003.

   [RFC3714]                       Floyd, S. and J. Kempf, "IAB Concerns
                                   Regarding Congestion Control for
                                   Voice Traffic in the Internet",
                                   RFC 3714, March 2004.

   [RFC4828]                       Floyd, S. and E. Kohler, "TCP
                                   Friendly Rate Control (TFRC): The
                                   Small-Packet (SP) Variant", RFC 4828,
                                   April 2007.

   [RFC5348]                       Floyd, S., Handley, M., Padhye, J.,
                                   and J. Widmer, "TCP Friendly Rate
                                   Control (TFRC): Protocol
                                   Specification", RFC 5348,
                                   September 2008.

   [RFC5562]                       Kuzmanovic, A., Mondal, A., Floyd,
                                   S., and K. Ramakrishnan, "Adding
                                   Explicit Congestion Notification
                                   (ECN) Capability to TCP's SYN/ACK
                                   Packets", RFC 5562, June 2009.

   [RFC5670]                       Eardley, P., "Metering and Marking
                                   Behaviour of PCN-Nodes", RFC 5670,
                                   November 2009.

   [RFC5681]                       Allman, M., Paxson, V., and E.
                                   Blanton, "TCP Congestion Control",
                                   RFC 5681, September 2009.

   [RFC5690]                       Floyd, S., Arcia, A., Ros, D., and J.
                                   Iyengar, "Adding Acknowledgement
                                   Congestion Control to TCP", RFC 5690,
                                   February 2010.

   [RFC6077]                       Papadimitriou, D., Welzl, M., Scharf,
                                   M., and B. Briscoe, "Open Research
                                   Issues in Internet Congestion
                                   Control", RFC 6077, February 2011.

   [Rate_fair_Dis]                 Briscoe, B., "Flow Rate Fairness:
                                   Dismantling a Religion", ACM
                                   CCR 37(2)63--74, April 2007, <http://
                                   portal.acm.org/
                                   citation.cfm?id=1232926>.

   [WindowPropFair]                Siris, V., "Service Differentiation
                                   and Performance of Weighted Window-
                                   Based Congestion Control and Packet
                                   Marking Algorithms in ECN Networks",
                                   Computer Communications 26(4) 314--
                                   326, 2002, <http://www.ics.forth.gr/
                                   netgroup/publications/
                                   weighted_window_control.html>.

   [gentle_RED]                    Floyd, S., "Recommendation on using
                                   the "gentle_" variant of RED", Web
                                   page , March 2000, <http://
                                   www.icir.org/floyd/red/gentle.html>.

   [pBox]                          Floyd, S. and K. Fall, "Promoting the
                                   Use of End-to-End Congestion Control
                                   in the Internet", IEEE/ACM
                                   Transactions on Networking 7(4) 458--
                                   472, August 1999, <http://
                                   www.aciri.org/floyd/
                                   end2end-paper.html>.

   [pktByteEmail]                  Floyd, S., "RED: Discussions of Byte
                                    and Packet Modes", Red Queue
                                    Management, email, March 1997,
                                    <http://www-nrg.ee.lbl.gov/floyd/
                                    REDaveraging.txt>.

Appendix A.  Survey of RED Implementation Status

   This Appendix is informative, not normative.

   In May 2007 a survey was conducted of 84 vendors to assess how
   widely drop probability based on packet size has been implemented
   in RED (Table 3).  About 19% of those surveyed replied, giving a
   sample size of 16.  Although in most cases we do not have
   permission to identify the respondents, we can say that those that
   have responded include most of the larger equipment vendors,
   covering a large fraction of the market.  The two who gave
   permission to be identified were Cisco and Alcatel-Lucent.  The
   others range across the large network equipment vendors at L3 & L2,
   firewall vendors, wireless equipment vendors, as well as large
   software businesses with a small selection of networking products.
   All those who responded confirmed that they have not implemented
   the variant of RED with drop dependent on packet size (2 were
   fairly sure they had not but needed to check more thoroughly).  At
   the time the survey was conducted, Linux did not implement RED with
   packet-size bias of drop, although we have not investigated a wider
   range of open source code.

   +-------------------------------+----------------+-----------------+
   |                      Response | No. of vendors | %age of vendors |
   +-------------------------------+----------------+-----------------+
   |               Not implemented |             14 |             17% |
   |    Not implemented (probably) |              2 |              2% |
   |                   Implemented |              0 |              0% |
   |                   No response |             68 |             81% |
   | Total companies/orgs surveyed |             84 |            100% |
   +-------------------------------+----------------+-----------------+

    Table 3: Vendor Survey on byte-mode drop variant of RED (lower drop
                      probability for small packets)

   Where reasons have been given, the extra complexity of packet bias
   code has been most prevalent, though one vendor had a more
   principled reason for avoiding it--similar to the argument of this
   document.

   Our survey was of vendor implementations, so we cannot be certain
   about operator deployment.  But we believe many queues in the
   Internet are still tail-drop.  The company of one of the co-authors
   (BT) has widely deployed RED, but many tail-drop queues are bound
   to still exist, particularly in access network equipment and on
   middleboxes like firewalls, where RED is not always available.

   Routers using a memory architecture based on fixed size buffers
   with borrowing may also still be prevalent in the Internet.  As
   explained in Section 4.2.1, these also provide a marginal (but
   legitimate) bias towards small packets.  So even though RED
   byte-mode drop is not prevalent, it is likely there is still some
   bias towards small packets in the Internet due to tail drop and
   fixed buffer borrowing.

Appendix B.  Sufficiency of Packet-Mode Drop

   This Appendix is informative, not normative.

   Here we check that packet-mode drop (or marking) in the network
   gives sufficiently generic information for the transport layer to
   use.  We check against a 2x2 matrix of four scenarios that may
   occur now or in the future (Table 4).  The horizontal and vertical
   dimensions have been chosen because each tests extremes of
   sensitivity to packet size in the transport and in the network
   respectively.

   Note that this section does not consider byte-mode drop at all.
   Having deprecated byte-mode drop, the goal here is to check that
   packet-mode drop will be sufficient in all cases.

   +-------------------------------+-----------------+-----------------+
   |                     Transport |  a) Independent | b) Dependent on |
   |                               |  of packet size |  packet size of |
   | Network                       |  of congestion  |    congestion   |
   |                               |  notifications  |  notifications  |
   +-------------------------------+-----------------+-----------------+
   | 1) Predominantly              |   Scenario a1)  |   Scenario b1)  |
   | bit-congestible network       |                 |                 |
   | 2) Mix of bit-congestible and |   Scenario a2)  |   Scenario b2)  |
   | pkt-congestible network       |                 |                 |
   +-------------------------------+-----------------+-----------------+

                Table 4: Four Possible Congestion Scenarios

   Appendix B.1 focuses on the horizontal dimension of Table 4,
   checking that packet-mode drop (or marking) gives sufficient
   information, whether or not the transport uses it--scenarios b) and
   a) respectively.

   Appendix B.2 focuses on the vertical dimension of Table 4, checking
   that packet-mode drop gives sufficient information to the
   transport, whether resources in the network are bit-congestible or
   packet-congestible (these terms are defined in Section 1.1).

A.2.  Example Scenarios

A.2.1.  Notation (these terms are defined in Section 1.1).

   Notation:  To prove our idealised wire protocol (Appendix A.1) is correct, be concrete, we will compare two flows with different
      packet sizes, s_1 and s_2.  As an example, we will take s_1 = 60B
      = 480b and s_2 [bit/
   pkt], = 1500B = 12,000b.

      A flow's bit rate, x [bps], is related to make sure their transports each see its packet rate, u
      [pps], by

         x(t) = s.u(t).

      In the correct bit-congestible case, path congestion
   notification.  Initially, within each flow we will take be denoted by
      p_b, and in the packet-congestible case by p_p.  When either case
      is implied, the letter p alone will denote path congestion.

B.1.  Packet-Size (In)Dependence in Transports

   In all cases we consider a packet-mode drop queue that indicates
   congestion by dropping (or marking) packets with probability p
   irrespective of packet size. We use an example value of loss
   (marking) probability, p=0.1%.

   A transport like RFC5681 TCP treats a congestion notification on any
   packet whatever its size as one event.  However, a network with just
   the packet-mode drop algorithm does give more information if the
   transport chooses to use it.  We will use Table 5 to illustrate this.

   We will set aside the last column until later.  The columns labelled
   "Flow 1" and "Flow 2" compare two flows consisting of 60B and 1500B
   packets respectively.  The body of the table considers two separate
   cases, one where the flows have equal bit-rates and the other with
   equal packet-rates.  In both cases, the two flows fill a 96Mbps link.
   Therefore, in the equal bit-rate case they each have half the bit-
   rate (48Mbps).  Whereas, with equal packet-rates, flow 1 uses 25
   times smaller packets so it gets 25 times less bit-rate--it only gets
   1/(1+25) of the link capacity (96Mbps/26 = 4Mbps after rounding).  In
   contrast flow 2 gets 25 times more bit-rate (92Mbps) in the equal
   packet-rate case because its packets are 25 times larger.  The packet
   rate shown for each flow could easily be derived once the bit-rate
   was known by dividing bit-rate by packet size, as shown in the column
   labelled "Formula".

       Parameter               Formula      Flow 1  Flow 2 Combined
       ----------------------- ----------- ------- ------- --------
       Packet size             s/8             60B  1,500B    (Mix)
       Packet size             s              480b 12,000b    (Mix)
       Pkt loss probability    p              0.1%    0.1%     0.1%

       EQUAL BIT-RATE CASE
       Bit-rate                x            48Mbps  48Mbps   96Mbps
       Packet-rate             u = x/s     100kpps   4kpps  104kpps
       Absolute pkt-loss-rate  p*u          100pps    4pps   104pps
       Absolute bit-loss-rate  p*u*s        48kbps  48kbps   96kbps
       Ratio of lost/sent pkts p*u/u          0.1%    0.1%     0.1%
        Ratio of lost/sent bits p*u*s/(u*s)    0.1%    0.1%     0.1%

       EQUAL PACKET-RATE CASE
       Bit-rate                x             4Mbps  92Mbps   96Mbps
        Packet-rate             u = x/s       8kpps   8kpps    15kpps
       Absolute pkt-loss-rate  p*u            8pps    8pps    15pps
       Absolute bit-loss-rate  p*u*s         4kbps  92kbps   96kbps
       Ratio of lost/sent pkts p*u/u          0.1%    0.1%     0.1%
       Ratio of lost/sent bits p*u*s/(u*s)    0.1%    0.1%     0.1%

    Table 5: Absolute Loss Rates and Loss Ratios for Flows of Small and
                       Large Packets and Both Combined
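   As a cross-check, the figures in Table 5 can be reproduced from the
   notation x = s.u and the example values above.  The following sketch
   is purely illustrative; the variable names are ours, not part of this
   memo:

```python
# Reproduce the rows of Table 5 from x = s*u and loss probability p.
p = 0.001                 # loss (marking) probability, 0.1%
s1, s2 = 480, 12_000      # packet sizes [b]: 60B and 1500B
link = 96_000_000         # link capacity [b/s]: 96Mbps

# EQUAL BIT-RATE CASE: each flow gets half the link (48Mbps).
x1 = x2 = link / 2
u1, u2 = x1 / s1, x2 / s2                    # packet rates: 100kpps, 4kpps
pkt_loss_rates = (p * u1, p * u2)            # absolute pkt-loss: ~100pps, ~4pps
bit_loss_rates = (p * u1 * s1, p * u2 * s2)  # absolute bit-loss: ~48kbps each

# EQUAL PACKET-RATE CASE: both flows send u pkt/s and together fill
# the link, so u*(s1 + s2) = link.
u = link / (s1 + s2)                       # ~7.7kpps each (table rounds to 8kpps)
combined_bit_loss = p * (u * s1 + u * s2)  # p*(u1*s1 + u2*s2) = ~96kbps
```

   In both cases the combined absolute bit-loss-rate comes out as 0.1%
   of the 96Mbps link, matching the "Combined" column.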

   So far we have merely set up the scenarios.  We now consider
   congestion notification in the scenario.  Two TCP flows with the same
   round trip time aim to equalise their packet-loss-rates over time.
   That is the number of packets lost in a second, which is the packets
   received per second (u) multiplied by the probability that each one
   is dropped (p).  Thus TCP converges on the "Equal packet-rate" case,
   where both flows aim for the same "Absolute packet-loss-rate" (both
   8pps in the table).

   Packet-mode drop actually gives flows sufficient information to
   measure their loss-rate in bits per second, if they choose, not just
   packets per second.  Each flow can count the size of a lost or marked
   packet and scale its rate-response in proportion (as TFRC-SP does).
   The result is shown in the row entitled "Absolute bit-loss-rate",
   where the bits lost in a second are the packets received per second
   (u) multiplied by the probability of losing a packet (p) multiplied
   by the packet size (s).  Such an algorithm would try to remove any
   imbalance in bit-loss-rate, such as the wide disparity in the "Equal
   packet-rate" case (4kbps vs. 92kbps).  A packet-size-dependent
   algorithm would therefore drive both flows towards the "Equal
   bit-rate" case, by driving them to equal bit-loss-rates (both 48kbps
   in this example).
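   The two responses can be contrasted with a small simulation.  This is
   only a sketch under our own assumptions (constant-size flows at the
   approximate 8kpps rates of Table 5); measure_responses is a
   hypothetical helper, not code from any cited transport:

```python
import random

def measure_responses(size_bits, pkt_rate, p=0.001, seconds=100, seed=1):
    """Count packet-mode congestion signals two ways: per packet (as
    RFC 5681 TCP does) and scaled by each lost packet's size (as a
    TFRC-SP-like transport could)."""
    rng = random.Random(seed)
    pkt_losses = bit_losses = 0
    for _ in range(int(pkt_rate * seconds)):
        if rng.random() < p:          # dropped with probability p, any size
            pkt_losses += 1           # one event, whatever the size
            bit_losses += size_bits   # the same event, scaled by its size
    return pkt_losses / seconds, bit_losses / seconds

small = measure_responses(480, 8_000)     # flow 1: 60B packets
large = measure_responses(12_000, 8_000)  # flow 2: 1500B packets
# With the same seed and packet rate the loss events coincide, so the
# per-packet signals are equal while the bit signals differ 25-fold.
```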

   The explanation so far has assumed that each flow consists of packets
   of only one constant size.  Nonetheless, it extends naturally to
   flows with mixed packet sizes.  In the right-most column of Table 5 a
   flow of mixed size packets is created simply by considering flow 1
   and flow 2 as a single aggregated flow.  There is no need for a flow
   to maintain an average packet size.  It is only necessary for the
   transport to scale its response to each congestion indication by the
   size of each individual lost (or marked) packet.  Taking for example
   the "Equal packet-rate" case, in one second about 8 small packets and
   8 large packets are lost (making closer to 15 than 16 losses per
   second due to rounding).  If the transport multiplies each loss by
   its size, in one second it responds to 8*480b and 8*12,000b lost
   bits, adding up to 96,000 lost bits in a second.  This double checks
   correctly, being the same as 0.1% of the total bit-rate of 96Mbps.
   For completeness, the formula for absolute bit-loss-rate is p(u1*s1+
   u2*s2).
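   This aggregated-flow arithmetic can be checked by simulation.  The
   sketch below uses the exact link-filling packet rate rather than
   Table 5's rounded 8kpps; the seed and loop structure are our own
   illustrative choices:

```python
import random

# Treat flow 1 (480b packets) and flow 2 (12,000b packets) as one
# mixed-size flow and scale the response by the size of each individual
# lost packet; no average packet size is tracked anywhere.
rng = random.Random(42)
p = 0.001                            # loss probability, 0.1%
seconds = 200
u = 96_000_000 / (480 + 12_000)      # link-filling packet rate per flow

lost_bits = 0
for _ in range(round(u * seconds)):
    for size in (480, 12_000):       # one packet of each size per tick
        if rng.random() < p:
            lost_bits += size        # response scaled per individual loss

rate = lost_bits / seconds
# rate fluctuates around p*(u*480 + u*12000) = 96,000 lost bits/s,
# i.e. 0.1% of the 96Mbps link.
```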

   Incidentally, a transport will always measure the same loss
   probability irrespective of whether it measures in packets or in
   bytes.  In other words, the ratio of lost to sent packets will be the
   same as the ratio of lost to sent bytes.  (This is why TCP's bit rate
   is still proportional to packet size even when byte-counting is used,
   as recommended for TCP in [RFC5681], mainly for orthogonal security
   reasons.)  This is intuitively obvious by comparing two example
   flows; one with 60B packets, the other with 1500B packets.  If both
   flows pass through a queue with drop probability 0.1%, each flow will
   lose 1 in 1,000 packets.  In the stream of 60B packets, the ratio of
   bytes lost to sent will be 60B in every 60,000B; and in the stream of
   1500B packets, the loss ratio will be 1,500B out of 1,500,000B.  When
   the transport responds to the ratio of lost to sent packets, it will
   measure the same ratio whether it measures in packets or bytes: 0.1%
   in both cases.  The fact that this ratio is the same whether measured
   in packets or bytes can be seen in Table 5, where the ratio of lost
   to sent packets and the ratio of lost to sent bytes is always 0.1% in
   all cases (recall that the scenario was set up with p=0.1%).
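   The cancellation is easy to verify directly (a trivial sketch using
   our own example numbers):

```python
# For each constant-packet-size stream, the lost/sent ratio is the same
# whether counted in packets or in bytes, because a lost packet loses
# exactly its own size in bytes.
ratios = {}
for size in (60, 1500):                        # bytes per packet
    sent_pkts, lost_pkts = 1000, 1             # 1 in 1,000 packets lost
    pkt_ratio = lost_pkts / sent_pkts
    byte_ratio = (lost_pkts * size) / (sent_pkts * size)   # size cancels
    ratios[size] = (pkt_ratio, byte_ratio)
# Both streams measure 0.1% either way.
```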

   This discussion of how the ratio can be measured in packets or bytes
   is only raised here to highlight that it is irrelevant to this memo!
   Whether a transport depends on packet size or not depends on how this
   ratio is used within the congestion control algorithm.

   So far we have shown that packet-mode drop passes sufficient
   information to the transport layer so that the transport can take
   account of bit-congestion, by using the sizes of the packets that
   indicate congestion.  We have also shown that the transport can
   choose not to take packet size into account if it wishes.  We will
   now consider whether the transport can know which to do.

B.2.  Bit-Congestible and Packet-Congestible Indications

   As a thought-experiment, imagine an idealised congestion notification
   protocol that supports both bit-congestible and packet-congestible
   resources.  It would require at least two ECN flags, one for each of
   bit-congestible and packet-congestible resources.

   1.  A packet-congestible resource trying to code congestion level p_p
       into a packet stream should mark the idealised `packet
       congestion' field in each packet with probability p_p
       irrespective of the packet's size.  The transport should then
       take a packet with the packet congestion field marked to mean
       just one mark, irrespective of the packet size.

   2.  A bit-congestible resource trying to code time-varying byte-
       congestion level p_b into a packet stream should mark the `byte
       congestion' field in each packet with probability p_b, again
       irrespective of the packet's size.  Unlike before, the transport
       should take a packet with the byte congestion field marked to
       count as a mark on each byte in the packet.
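   The transport-side accounting in this thought-experiment could be
   sketched as follows; the function and field names are purely
   illustrative, since no such two-flag protocol exists:

```python
def account(packets):
    """Tally the two idealised congestion signals from a sequence of
    (size_bytes, pkt_congestion_marked, byte_congestion_marked) tuples."""
    pkt_marks = 0       # packet-congestible signal [marks]
    byte_marks = 0      # bit-congestible signal [bytes]
    for size, pkt_marked, byte_marked in packets:
        if pkt_marked:
            pkt_marks += 1      # one mark, irrespective of packet size
        if byte_marked:
            byte_marks += size  # a mark on every byte of the packet
    return pkt_marks, byte_marks

# One small and one large packet, both marked in both fields: the
# packet-congestion count ignores size; the byte-congestion count sums it.
result = account([(60, True, True), (1500, True, True)])
```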

   This hides a fundamental problem--much more fundamental than whether
   we can magically create header space for yet another ECN flag, or
   whether it would work while being deployed incrementally.
   Distinguishing drop from delivery naturally provides just one
   implicit bit of congestion indication information--the packet is
   either dropped or not.  It is hard to drop a packet in two ways that
   are distinguishable remotely.  This is a similar problem to that of
   distinguishing wireless transmission losses from congestive losses.

   This problem would not be solved even if ECN were universally
   deployed.  A congestion notification protocol must survive a
   transition from low levels of congestion to high.  Marking two states
   is feasible with explicit marking, but much harder if packets are
   dropped.  Also, it will not always be cost-effective to implement AQM
   at every low level resource, so drop will often have to suffice.

   We are not saying two ECN fields will be needed (and we are not
   saying that somehow a resource should be able to drop a packet in one
   of two different ways so that the transport can distinguish which
   sort of drop it was!).  These two congestion notification channels
   are a conceptual device to illustrate a dilemma we could face in the
   future.  Section 3 gives four good reasons why it would be a bad idea
   to allow for packet size by biasing drop probability in favour of
   small packets within the network.  The impracticality of our thought
   experiment shows that it will be hard to give transports a practical
   way to know whether to take account of the size of congestion
   indication packets or not.

   Fortunately, this dilemma is not pressing because by design most
   equipment becomes bit-congested before its packet-processing becomes
   congested (as already outlined in Section 1.1).  Therefore transports
   can be designed on the relatively sound assumption that a congestion
   indication will usually imply bit-congestion.

   Nonetheless, although the above idealised protocol isn't intended for
   implementation, we do want to emphasise that research is needed to
   predict whether there are good reasons to believe that packet
   congestion might become more common, and if so, to find a way to
   somehow distinguish between bit and packet congestion [RFC3714].

   Recently, the dual resource queue (DRQ) proposal [DRQ] has been made
   on the premise that, as network processors become more cost
   effective, per packet operations will become more complex
   (irrespective of whether more function in the network is desirable).
   Consequently the premise is that CPU congestion will become more
   common.  DRQ is a proposed modification to the RED algorithm that
   folds both bit congestion and packet congestion into one signal
   (either loss or ECN).

   Finally, we note one further complication.  Strictly, packet-
   congestible resources are often cycle-congestible.  For instance, for
   routing look-ups, load depends on the complexity of each look-up and
   whether the pattern of arrivals is amenable to caching or not.  This
   also reminds us that any solution must not require a forwarding
   engine to use excessive processor cycles in order to decide how to
   say it has no spare processor cycles.

Appendix C.  Byte-mode Drop Complicates Policing Congestion Response

   There are two main classes of approach to policing congestion
   response: i) policing at each bottleneck link or ii) policing at the
   edges of networks.  Packet-mode drop in RED is compatible with
   either, while byte-mode drop precludes edge policing.

   The simplicity of an edge policer relies on one dropped or marked
   packet being equivalent to another of the same size without having to
   know which link the drop or mark occurred at.  However, the byte-mode
   drop algorithm has to depend on the local MTU of the line--it needs
   to use some concept of a 'normal' packet size.  Therefore, one
   dropped or marked packet from a byte-mode drop algorithm is not
   necessarily equivalent to another from a different link.  A policing
   function local to the link can know the local MTU where the
   congestion occurred.  However, a policer at the edge of the network
   cannot, at least not without a lot of complexity.

   The early research proposals for type (i) policing at a bottleneck
   link [pBox] used byte-mode drop, then detected flows that contributed
   disproportionately to the number of packets dropped.  However, with
   no extra complexity, later proposals used packet-mode drop and looked
   for flows that contributed a disproportionate amount of dropped bytes
   [CHOKe_Var_Pkt].

   Work is progressing on the congestion exposure protocol (ConEx
   [I-D.ietf-conex-concepts-uses]), which enables a type (ii) edge
   policer located at a user's attachment point.  The idea is to be able
   to take an integrated view of the effect of all a user's traffic on
   any link in the internetwork.  However, byte-mode drop would
   effectively preclude such edge policing because of the MTU issue
   above.

   Indeed, making drop probability depend on the size of the packets
   that bits happen to be divided into would simply encourage the bits
   to be divided into smaller packets in order to confuse policing.  In
   contrast, as long as a dropped/marked packet is taken to mean that
   all the bytes in the packet are dropped/marked, a policer can remain
   robust against bits being re-divided into different size packets or
   across different size flows [Rate_fair_Dis].

   In summary, byte-mode drop would irreversibly complicate any attempt
   to fix the Internet's incentive structures.

Appendix D.  Changes from Previous Versions

   To be removed by the RFC Editor on publication.

   Full incremental diffs between each version are available at
   <http://www.cs.ucl.ac.uk/staff/B.Briscoe/pubs.html#byte-pkt-congest>
   or
   <http://tools.ietf.org/wg/tsvwg/draft-ietf-tsvwg-byte-pkt-congest/>
   (courtesy of the rfcdiff tool):

   From -04 to -05:

      *  Changed from Informational to BCP and highlighted non-normative
         sections and appendices

      *  Removed language about consensus

      *  Added "Example Comparing Packet-Mode Drop and Byte-Mode Drop"

      *  Arranged "Motivating Arguments" into a more logical order and
         completely rewrote "Transport-Independent Network" & "Scaling
         Congestion Control with Packet Size" arguments.  Removed "Why
         Now?"

      *  Clarified applicability of certain recommendations

      *  Shifted vendor survey to an Appendix

      *  Cut down "Outstanding Issues and Next Steps"

      *  Re-drafted the start of the conclusions to highlight the three
         distinct areas of concern

      *  Completely re-wrote appendices

      *  Editorial corrections throughout.

   From -03 to -04:

      *  Reordered Sections 2 and 3, and some clarifications here and
         there based on feedback from Colin Perkins and Mirja
         Kuehlewind.

   From -02 to -03:

      *  Structural changes:

         +  Split off text at end of "Scaling Congestion Control with
            Packet Size" into new section "Transport-Independent
            Network"

         +  Shifted "Recommendations" straight after "Motivating
            Arguments" and added "Conclusions" at end to reinforce
            Recommendations

         +  Added more internal structure to Recommendations, so that
            recommendations specific to RED or to TCP are just
            corollaries of a more general recommendation, rather than
            being listed as a separate recommendation.

         +  Renamed "State of the Art" as "Critical Survey of Existing
            Advice" and retitled a number of subsections with more
            descriptive titles.

         +  Split end of "Congestion Coding: Summary of Status" into a
            new subsection called "RED Implementation Status".

         +  Removed text that had been in the Appendix "Congestion
            Notification Definition: Further Justification".

      *  Reordered the intro text a little.

      *  Made it clearer when advice being reported is deprecated and
         when it is not.

      *  Described AQM as in network equipment, rather than saying "at
         the network layer" (to side-step controversy over whether
         functions like AQM are in the transport layer but in network
         equipment).

      *  Minor improvements to clarity throughout

   From -01 to -02:

      *  Restructured the whole document for (hopefully) easier reading
         and clarity.  The concrete recommendation, in RFC2119 language,
         is now in Section 7.

   From -00 to -01:

      *  Minor clarifications throughout and updated references

   From briscoe-byte-pkt-mark-02 to ietf-byte-pkt-congest-00:

      *  Added note on relationship to existing RFCs

      *  Posed the question of whether packet-congestion could become
         common and deferred it to the IRTF ICCRG.  Added ref to the
         dual-resource queue (DRQ) proposal.

      *  Changed PCN references from the PCN charter & architecture to
         the PCN marking behaviour draft most likely to imminently
         become the standards track WG item.

   From briscoe-byte-pkt-mark-01 to -02:

      *  Abstract reorganised to align with clearer separation of issue
         in the memo.

      *  Introduction reorganised with motivating arguments removed to
         new Section 3.

      *  Clarified avoiding lock-out of large packets is not the main or
         only motivation for RED.

      *  Mentioned choice of drop or marking explicitly throughout,
         rather than trying to coin a word to mean either.

      *  Generalised the discussion throughout to any packet forwarding
         function on any network equipment, not just routers.

      *  Clarified the last point about why this is a good time to sort
         out this issue: because it will be hard / impossible to design
         new transports unless we decide whether the network or the
         transport is allowing for packet size.

      *  Added statement explaining the horizon of the memo is long
         term, but with short term expediency in mind.

      *  Added material on scaling congestion control with packet size
         (Section 3.4).

      *  Separated out issue of normalising TCP's bit rate from issue of
         preference to control packets (Section 3.2).

      *  Divided up Congestion Measurement section for clarity,
         including new material on fixed size packet buffers and buffer
         carving (Section 4.1.1 & Section 4.2.1) and on congestion
         measurement in wireless link technologies without queues
         (Section 4.1.2).

      *  Added section on 'Making Transports Robust against Control
         Packet Losses' (Section 4.2.3) with existing & new material
         included.

      *  Added tabulated results of vendor survey on byte-mode drop
         variant of RED (Table 3).

   From briscoe-byte-pkt-mark-00 to -01:

      *  Clarified applicability to drop as well as ECN.

      *  Highlighted DoS vulnerability.

      *  Emphasised that drop-tail suffers from similar problems to
         byte-mode drop, so only byte-mode drop should be turned off,
         not RED itself.

      *  Clarified the original apparent motivations for recommending
         byte-mode drop included protecting SYNs and pure ACKs more than
         equalising the bit rates of TCPs with different segment sizes.
         Removed some conjectured motivations.

      *  Added support for updates to TCP in progress (ackcc & ecn-syn-
         ack).

      *  Updated survey results with newly arrived data.

      *  Pulled all recommendations together into the conclusions.

      *  Moved some detailed points into two additional appendices and a
         note.

      *  Considerable clarifications throughout.

      *  Updated references

Authors' Addresses

   Bob Briscoe
   BT
   B54/77, Adastral Park
   Martlesham Heath
   Ipswich  IP5 3RE
   UK

   Phone: +44 1473 645196
   EMail: bob.briscoe@bt.com
   URI:   http://bobbriscoe.net/

   Jukka Manner
   Aalto University
   Department of Communications and Networking (Comnet)
   P.O. Box 13000
   FIN-00076 Aalto
   Finland

   Phone: +358 9 470 22481
   EMail: jukka.manner@tkk.fi
   URI:   http://www.netlab.tkk.fi/~jmanner/