Transport Area Working Group                                  B. Briscoe
Internet-Draft                                                        BT
Updates: 2309 (if approved)                                    J. Manner
Intended status: Informational                          Aalto University
Expires: January 13, 2011                                  July 12, 2010


                Byte and Packet Congestion Notification
                  draft-ietf-tsvwg-byte-pkt-congest-02

Abstract

   This memo concerns dropping or marking packets using active queue
   management (AQM) such as random early detection (RED) or pre-
   congestion notification (PCN).  We give two strong recommendations:
   (1) packet size should be taken into account when transports read
   congestion indications, not when network equipment writes them, and
   (2) the byte-mode packet drop variant of AQM algorithms such as RED
   should not be used to drop fewer small packets.

Status of This Memo

   This Internet-Draft is submitted to IETF in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on January 13, 2011.

Copyright Notice

   Copyright (c) 2010 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Introduction
     1.1.  Terminology and Scoping
     1.2.  Why now?
   2.  Motivating Arguments
     2.1.  Scaling Congestion Control with Packet Size
     2.2.  Avoiding Perverse Incentives to (ab)use Smaller Packets
     2.3.  Small != Control
     2.4.  Implementation Efficiency
   3.  The State of the Art
     3.1.  Congestion Measurement: Status
       3.1.1.  Fixed Size Packet Buffers
       3.1.2.  Congestion Measurement without a Queue
     3.2.  Congestion Coding: Status
       3.2.1.  Network Bias when Encoding
       3.2.2.  Transport Bias when Decoding
       3.2.3.  Making Transports Robust against Control Packet Losses
       3.2.4.  Congestion Coding: Summary of Status
   4.  Outstanding Issues and Next Steps
     4.1.  Bit-congestible World
     4.2.  Bit- & Packet-congestible World
   5.  Recommendation and Conclusions
     5.1.  Recommendation on Queue Measurement
     5.2.  Recommendation on Notifying Congestion
     5.3.  Recommendation on Responding to Congestion
     5.4.  Recommended Future Research
   6.  Security Considerations
   7.  Acknowledgements
   8.  Comments Solicited
   9.  References
     9.1.  Normative References
     9.2.  Informative References
   Appendix A.  Congestion Notification Definition: Further
                Justification
   Appendix B.  Idealised Wire Protocol
     B.1.  Protocol Coding
     B.2.  Example Scenarios
       B.2.1.  Notation
       B.2.2.  Bit-congestible resource, equal bit rates (Ai)
       B.2.3.  Bit-congestible resource, equal packet rates (Bi)
       B.2.4.  Pkt-congestible resource, equal bit rates (Aii)
       B.2.5.  Pkt-congestible resource, equal packet rates (Bii)
   Appendix C.  Byte-mode Drop Complicates Policing Congestion
                Response
   Appendix D.  Changes from Previous Versions
   Authors' Addresses

1.  Introduction

   When notifying congestion, the problem of how (and whether) to take
   packet sizes into account has exercised the minds of researchers and
   practitioners for as long as active queue management (AQM) has been
   discussed.  Indeed, one reason AQM was originally introduced was to
   reduce the lock-out effects that small packets can have on large
   packets in drop-tail queues.  This memo aims to state the principles
   we should be using and to come to conclusions on what these
   principles will mean for future protocol design, taking into account
   the deployments we have already.

   The byte vs. packet dilemma arises at three stages in the congestion
   notification process:

   Measuring congestion:  When the congested resource decides locally to
      measure how congested it is.  (Should the queue measure its length
      in bytes or packets?);

   Coding congestion notification into the wire protocol:  When the
      congested resource decides whether to notify the level of
      congestion on each particular packet.  (When a queue considers
      whether to notify congestion by dropping or marking a particular
      packet, should its decision depend on the byte-size of the
      particular packet being dropped or marked?);

   Decoding congestion notification from the wire protocol:  When the
      transport interprets the notification in order to decide how much
      to respond to congestion.  (Should the transport take into account
      the byte-size of each missing or marked packet?).

   Consensus has emerged over the years concerning the first stage:
   whether queues are measured in bytes or packets, termed byte-mode
   queue measurement or packet-mode queue measurement.  This memo
   records this consensus in the RFC Series.  In summary the choice
   solely depends on whether the resource is congested by bytes or
   packets.

   The controversy is mainly around the last two stages: whether to
   allow for the size of the specific packet notifying congestion i)
   when the network encodes or ii) when the transport decodes the
   congestion notification.

   Currently, the RFC series is silent on this matter other than a paper
   trail of advice referenced from [RFC2309], which conditionally
   recommends byte-mode (packet-size dependent) drop [pktByteEmail].
   The primary purpose of this memo is to build a definitive consensus
   against such deliberate preferential treatment for small packets in
   AQM algorithms and to record this advice within the RFC series.
   Fortunately all the implementers who responded to our survey
   (Section 3.2.4) have not followed the earlier advice, so the
   consensus this memo argues for seems to already exist in
   implementations.

   The primary conclusion of this memo is that packet size should be
   taken into account when transports read congestion indications, not
   when network equipment writes them.  Reducing drop of small packets
   has some tempting advantages: i) it drops less control packets, which
   tend to be small and ii) it makes TCP's bit-rate less dependent on
   packet size.  However, there are ways of addressing these issues at
   the transport layer, rather than reverse engineering network
   forwarding to fix specific transport problems.

   The second conclusion is that network layer algorithms like the byte-
   mode packet drop variant of RED should not be used to drop fewer
   small packets, because that creates a perverse incentive for
   transports to use tiny segments, consequently also opening up a DoS
   vulnerability.

   This memo is initially concerned with how we should correctly scale
   congestion control functions with packet size for the long term.  But
   it also recognises that expediency may be necessary to deal with
   existing widely deployed protocols that don't live up to the long
   term goal.  It turns out that the 'correct' variant of RED to deploy
   seems to be the one everyone has deployed, and no-one who responded
   to our survey has implemented the other variant.  However, at the
   transport layer, TCP congestion control is a widely deployed protocol
   that we argue doesn't scale correctly with packet size.  To date this
   hasn't been a significant problem because most TCPs have been used
   with similar packet sizes.  But, as we design new congestion
   controls, we should build in scaling with packet size rather than
   assuming we should follow TCP's example.

   This memo continues as follows.  Terminology and scoping are
   discussed next, and the reasons to make the recommendations presented
   in this memo now are given in Section 1.2.  Motivating arguments for
   our advice are given in Section 2.  We then survey the advice given
   previously in the RFC series, the research literature and the
   deployed legacy (Section 3) before listing outstanding issues
   (Section 4) that will need resolution both to inform future protocol
   designs and to handle legacy.  We then give concrete recommendations
   for the way forward in Section 5.  We finally give security
   considerations in Section 6.  The interested reader can also find
   further discussions about the theme of byte vs. packet in the
   appendices.

   This memo intentionally includes a non-negligible amount of material
   on the subject.  A busy reader can jump right into Section 5 to read
   a summary of the recommendations for the Internet community.

1.1.  Terminology and Scoping

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in [RFC2119].

   Congestion Notification:  Rather than aim to achieve what many have
      tried and failed, this memo will not try to define congestion.  It
      will give a working definition of what congestion notification
      should be taken to mean for this document.  Congestion
      notification is a changing signal that aims to communicate the
      ratio E/L. E is the instantaneous excess load offered to a
      resource that is either incapable of serving or unwilling to
      serve.  L is the instantaneous offered load.

      The phrase `unwilling to serve' is added, because AQM systems
      (e.g.  RED, PCN [RFC5670]) set a virtual limit smaller than the
      actual limit to the resource, then notify when this virtual limit
      is exceeded in order to avoid congestion of the actual capacity.

      Note that the denominator is offered load, not capacity.
      Therefore congestion notification is a real number bounded by the
      range [0,1].  This ties in with the most well-understood measure
      of congestion notification: drop fraction (often loosely called
      loss rate).  It also means that congestion has a natural
      interpretation as a probability; the probability of offered
      traffic not being served (or being marked as at risk of not being
      served).  Appendix A describes a further incidental benefit that
      arises from using load as the denominator of congestion
      notification.

   Explicit and Implicit Notification:  The byte vs. packet dilemma
      concerns congestion notification irrespective of whether it is
      signalled implicitly by drop or using explicit congestion
      notification (ECN [RFC3168] or PCN [RFC5670]).  Throughout this
      document, unless clear from the context, the term marking will be
      used to mean notifying congestion explicitly, while congestion
      notification will be used to mean notifying congestion either
      implicitly by drop or explicitly by marking.

   Bit-congestible vs. Packet-congestible:  If the load on a resource
      depends on the rate at which packets arrive, it is called packet-
      congestible.  If the load depends on the rate at which bits arrive
      it is called bit-congestible.

      Examples of packet-congestible resources are route look-up engines
      and firewalls, because load depends on how many packet headers
      they have to process.  Examples of bit-congestible resources are
      transmission links, radio power and most buffer memory, because
      the load depends on how many bits they have to transmit or store.
      Some machine architectures use fixed size packet buffers, so
      buffer memory in these cases is packet-congestible (see
      Section 3.1.1).

      Currently a design goal of network processing equipment such as
      routers and firewalls is to keep packet processing uncongested
      even under worst case bit rates with minimum packet sizes.
      Therefore, packet-congestion is currently rare, but there is no
      guarantee that it will not become common with future technology
      trends.

      Note that information is generally processed or transmitted with a
      minimum granularity greater than a bit (e.g. octets).  The
      appropriate granularity for the resource in question should be
      used, but for the sake of brevity we will talk in terms of bytes
      in this memo.

   Coarser granularity:  Resources may be congestible at higher levels
      of granularity than packets, for instance stateful firewalls are
      flow-congestible and call-servers are session-congestible.  This
      memo focuses on congestion of connectionless resources, but the
      same principles may be applicable for congestion notification
      protocols controlling per-flow and per-session processing or
      state.

   RED Terminology:  In RED, whether to use packets or bytes when
      measuring queues is respectively called packet-mode or byte-mode
      queue measurement.  And if the probability of dropping a packet
      depends on its byte-size it is called byte-mode drop, whereas if
      the drop probability is independent of a packet's byte-size it is
      called packet-mode drop.

1.2.  Why now?

   Now is a good time to discuss whether fairness between different
   sized packets would best be implemented in the network layer, or at
   the transport, for a number of reasons:

   1.  The packet vs. byte issue requires speedy resolution because the
       IETF pre-congestion notification (PCN) working group is
       standardising the external behaviour of a PCN congestion
       notification (AQM) algorithm [RFC5670];

   2.  [RFC2309] says RED may either take account of packet size or not
       when dropping, but gives no recommendation between the two,
       referring instead to advice on the performance implications in an
       email [pktByteEmail], which recommends byte-mode drop.  Further,
       just before RFC2309 was issued, an addendum was added to the
       archived email that revisited the issue of packet vs. byte-mode
       drop in its last paragraph, making the recommendation less clear-
       cut;

   3.  Without the present memo, the only advice in the RFC series on
       packet size bias in AQM algorithms would be a reference to an
       archived email in [RFC2309] (including an addendum at the end of
       the email to correct the original);

   4.  The IRTF Internet Congestion Control Research Group (ICCRG)
       recently took on the challenge of building consensus on what
       common congestion control support should be required from network
       forwarding functions in future [I-D.irtf-iccrg-welzl].  The wider
       Internet community needs to discuss whether the complexity of
       adjusting for packet size should be in the network or in
       transports;

   5.  Given there are many good reasons why larger path max
       transmission units (PMTUs) would help solve a number of scaling
       issues, we don't want to create any bias against large packets
       that is greater than their true cost;

   6.  The IETF has started to consider the question of fairness between
       flows that use different packet sizes (e.g. in the small-packet
       variant of TCP-friendly rate control, TFRC-SP [RFC4828]).  Given
       transports with different packet sizes, if we don't decide
       whether the network or the transport should allow for packet
       size, it will be hard if not impossible to design any transport
       protocol so that its bit-rate relative to other transports meets
       design guidelines [RFC5033] (Note however that, if the concern
       were fairness between users, rather than between flows
       [Rate_fair_Dis], relative rates between flows would have to come
       under run-time control rather than being embedded in protocol
       designs).

2.  Motivating Arguments

2.1.  Scaling Congestion Control with Packet Size

   There are two ways of interpreting a dropped or marked packet.  It
   can either be considered as a single loss event or as loss/marking of
   the bytes in the packet.  Here we try to design a test to see which
   approach scales with packet size.

   Given bit-congestible is the more common case (see Section 1.1),
   consider a bit-congestible link shared by many flows, so that each
   busy period tends to cause packets to be lost from different flows.
   The test compares two identical scenarios with the same applications,
   the same numbers of sources and the same load.  But the sources break
   the load into large packets in one scenario and small packets in the
   other.  Of course, because the load is the same, there will be
   proportionately more packets in the small packet case.

   The test of whether a congestion control scales with packet size is
   that it should respond in the same way to the same congestion
   excursion, irrespective of the size of the packets that the bytes
   causing congestion happen to be broken down into.

   A bit-congestible queue suffering a congestion excursion has to drop
   or mark the same excess bytes whether they are in a few large packets
   or many small packets.  So for the same congestion excursion, the
   same amount of bytes have to be shed to get the load back to its
   operating point.  But, of course, for smaller packets more packets
   will have to be discarded to shed the same bytes.

   If all the transports interpret each drop/mark as a single loss event
   irrespective of the size of the packet dropped, those with smaller
   packets will respond more to the same congestion excursion, failing
   our test.  On the other hand, if they respond proportionately less
   when smaller packets are dropped/marked, overall they will be able to
   respond the same to the same congestion excursion.

   Therefore, for a congestion control to scale with packet size it
   should respond to dropped or marked bytes (as TFRC-SP [RFC4828]
   effectively does), not just to dropped or marked packets irrespective
   of packet size (as TCP does).
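
   As an informal illustration of this test (our own sketch in Python,
   not part of any specification), the following compares a purely
   loss-event-based response with a byte-based response when the same
   excess bytes are shed from flows using different packet sizes.  The
   function names and the simplified multiplicative back-off are
   assumptions made only for the example:

      # Illustrative only: one congestion excursion, two packet sizes,
      # two ways of interpreting the resulting drops.
      EXCESS_BYTES = 15000          # bytes the queue must shed

      def drops(pkt_size):
          # a bit-congestible queue sheds the same bytes however they
          # are packaged, so smaller packets mean more drops
          return EXCESS_BYTES // pkt_size

      def remaining_rate_per_loss_event(pkt_size, beta=0.5):
          # each drop counts as one loss event, whatever its size
          return beta ** drops(pkt_size)

      def remaining_rate_per_lost_byte(pkt_size, beta=0.5):
          # each drop is weighted by the bytes it carried
          return beta ** (drops(pkt_size) * pkt_size / 1500.0)

      for size in (1500, 60):
          print(size, drops(size),
                remaining_rate_per_loss_event(size),
                remaining_rate_per_lost_byte(size))

   Only the byte-based interpretation leaves both flows with the same
   remaining rate after the same congestion excursion.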

   The email [pktByteEmail] referred to by RFC2309 says the question of
   whether a packet's own size should affect its drop probability
   "depends on the dominant end-to-end congestion control mechanisms".
   But we argue the network layer should not be optimised for whatever
   transport is predominant.

   TCP congestion control ensures that flows competing for the same
   resource each maintain the same number of segments in flight,
   irrespective of segment size.  So under similar conditions, flows
   with different segment sizes will get different bit rates.  But even
   though reducing the drop probability of small packets helps ensure
   TCPs with different packet sizes will achieve similar bit rates, we
   argue this correction should be made to TCP itself, not to the
   network in order to fix one transport, no matter how prominent it is.

   Effectively, favouring small packets is reverse engineering of the
   network layer around TCP, contrary to the excellent advice in
   [RFC3426], which asks designers to question "Why are you proposing a
   solution at this layer of the protocol stack, rather than at another
   layer?"

2.2.  Avoiding Perverse Incentives to (ab)use Smaller Packets

   Increasingly, it is being recognised that a protocol design must take
   care not to cause unintended consequences by giving the parties in
   the protocol exchange perverse incentives [Evol_cc][RFC3426].  Again,
   imagine a scenario where the same bit rate of packets will contribute
   the same to bit-congestion of a link irrespective of whether it is
   sent as fewer larger packets or more smaller packets.  A protocol
   design that caused larger packets to be more likely to be dropped
   than smaller ones would be dangerous in this case:

   Normal transports:  Even if a transport is not actually malicious, if
      it finds small packets go faster, over time it will tend to act in
      its own interest and use them.  Queues that give advantage to
      small packets create an evolutionary pressure for transports to
      send at the same bit-rate but break their data stream down into
      tiny segments to reduce their drop rate.  Encouraging a high
      volume of tiny packets might in turn unnecessarily overload a
      completely unrelated part of the system, perhaps more limited by
      header-processing than bandwidth.

   Malicious transports:  A queue that gives an advantage to small
      packets can be used to amplify the force of a flooding attack.  By
      sending a flood of small packets, the attacker can get the queue
      to discard more traffic in large packets, allowing more attack
      traffic to get through to cause further damage.  Such a queue
      allows attack traffic to have a disproportionately large effect on
      regular traffic without the attacker having to do much work.

      Note that, although the byte-mode drop variant of RED amplifies
      small packet attacks, drop-tail queues amplify small packet
      attacks even more (see Security Considerations in Section 6).
      Wherever possible neither should be used.

   Imagine two unresponsive flows arrive at a bit-congestible
   transmission link each with the same bit rate, say 1Mbps, but one
   consists of 1500B and the other 60B packets, which are 25x smaller.
   Consider a scenario where gentle RED [gentle_RED] is used, along with
   the variant of RED we advise against, i.e. where the RED algorithm is
   configured to adjust the drop probability of packets in proportion to
   each packet's size (byte mode packet drop).  In this case, if RED
   drops 25% of the larger packets, it will aim to drop 1% of the
   smaller packets (but in practice it may drop more as congestion
   increases [RFC4828](S.B.4)).  Even though both flows arrive with the
   same bit rate, the bit rate the RED queue aims to pass to the line
   will be 750k for the flow of larger packet but 990k for the smaller
   packets (but because of rate variation it will be less than this
   target).
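
   The arithmetic behind these figures can be checked with a short
   Python sketch (purely illustrative; the 25% operating point is the
   example above, not a recommendation for any RED deployment):

      RATE = 1000000              # each flow arrives at 1Mbps
      MAX_PKT = 1500              # bytes, the larger packet size
      p_large = 0.25              # example drop probability at 1500B

      for pkt_size in (1500, 60):
          # byte-mode packet drop scales the drop probability with the
          # packet's size relative to the maximum packet size
          p = p_large * pkt_size / MAX_PKT
          passed = RATE * (1 - p)
          print(pkt_size, round(p, 3), int(passed))
      # 1500B: p=0.25, ~750kbps passed; 60B: p=0.01, ~990kbps passed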

   It can be seen that this behaviour reopens the same denial of service
   vulnerability that drop tail queues offer to floods of small packets,
   though not necessarily as strongly (see Section 6).

2.3.  Small != Control

   It is tempting to drop small packets with lower probability to
   improve performance, because many control packets are small (TCP SYNs
   & ACKs, DNS queries & responses, SIP messages, HTTP GETs, etc) and
   dropping fewer control packets considerably improves performance.
   However, we must not give control packets preference purely by virtue
   of their smallness, otherwise it is too easy for any data source to
   get the same preferential treatment simply by sending data in smaller
   packets.  Again we should not create perverse incentives to favour
   small packets rather than to favour control packets, which is what we
   intend.

   Just because many control packets are small does not mean all small
   packets are control packets.

   So again, rather than fix these problems in the network layer, we
   argue that the transport should be made more robust against losses of
   control packets (see 'Making Transports Robust against Control Packet
   Losses' in Section 3.2.3).

2.4.  Implementation Efficiency

   Allowing for packet size at the transport rather than in the network
   ensures that neither the network nor the transport needs to do a
   multiply operation--multiplication by packet size is effectively
   achieved as a repeated add when the transport adds to its count of
   marked bytes as each congestion event is fed to it.  This isn't a
   principled reason in itself, but it is a happy consequence of the
   other principled reasons.
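
   For example (an illustrative sketch, not a specification of any
   particular transport), the accumulation can be done with one
   addition per congestion event:

      marked_bytes = 0

      def on_congestion_event(pkt_size_bytes):
          # adding the size of each marked or dropped packet as it is
          # reported achieves the multiply by packet size as a repeated
          # add, with no multiplication in the per-packet path
          global marked_bytes
          marked_bytes += pkt_size_bytes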

3.  The State of the Art

   The original 1993 paper on RED [RED93] proposed two options for the
   RED active queue management algorithm: packet mode and byte mode.
   Packet mode measured the queue length in packets and dropped (or
   marked) individual packets with a probability independent of their
   size.  Byte mode measured the queue length in bytes and marked an
   individual packet with probability in proportion to its size
   (relative to the maximum packet size).  In the paper's outline of
   further work, it was stated that no recommendation had been made on
   whether the queue size should be measured in bytes or packets, but
   noted that the difference could be significant.

   When RED was recommended for general deployment in 1998 [RFC2309],
   the two modes were mentioned implying the choice between them was a
   question of performance, referring to a 1997 email [pktByteEmail] for
   advice on tuning.  This email clarified that there were in fact two
   orthogonal choices: whether to measure queue length in bytes or
   packets (Section 3.1 below) and whether the drop probability of an
   individual packet should depend on its own size (Section 3.2 below).

3.1.  Congestion Measurement: Status

   The choice of which metric to use to measure queue length was left
   open in RFC2309.  It is now well understood that queues for bit-
   congestible resources should be measured in bytes, and queues for
   packet-congestible resources should be measured in packets.

   Where buffers are not configured or legacy buffers cannot be
   configured to the above guideline, we do not have to make allowances
   for such legacy in future protocol design.  If a bit-congestible
   buffer is measured in packets, the operator will have set the
   thresholds mindful of a typical mix of packet sizes.  Any AQM
   algorithm on such a buffer will be oversensitive to high proportions
   of small packets, e.g. a DoS attack, and undersensitive to high
   proportions of large packets.  But an operator can safely keep such a
   legacy buffer because any undersensitivity during unusual traffic
   mixes cannot lead to congestion collapse given the buffer will
   eventually revert to tail drop, discarding proportionately more large
   packets.

   Some modern queue implementations give a choice for setting RED's
   thresholds in byte-mode or packet-mode.  This may merely be an
   administrator-interface preference, not altering how the queue itself
   is measured but on some hardware it does actually change the way it
   measures its queue.  Whether a resource is bit-congestible or packet-
   congestible is a property of the resource, so an admin should not
   ever need to, or be able to, configure the way a queue measures
   itself.

   We believe the question of whether to measure queues in bytes or
   packets is fairly well understood these days.  The only outstanding
   issues concern how to measure congestion when the queue is bit
   congestible but the resource is packet congestible or vice versa.

   There is no controversy over what should be done.  It's just you have
   to be an expert in probability to work out what should be done
   (summarised in the following section) and, even if you have, it's not
   always easy to find a practical algorithm to implement it.
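
   As a minimal illustration of this consensus (our own sketch, not a
   normative algorithm), the unit in which a queue is measured simply
   follows from what congests the resource:

      def queue_length(queue, bit_congestible):
          # 'queue' is any sequence of packets, each with a size in
          # bytes.  Bit-congestible resources (links, most buffer
          # memory) measure their backlog in bytes; packet-congestible
          # resources (header processing, look-ups) measure it in
          # packets.
          if bit_congestible:
              return sum(pkt.size for pkt in queue)
          return len(queue)

   An AQM function then compares this length, in the appropriate unit,
   against its configured thresholds.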

3.1.1.  Fixed Size Packet Buffers

   Some, mostly older, queuing hardware sets aside fixed sized buffers
   in which to store each packet in the queue.  Also, with some
   hardware, any buffers not completely filled by a packet are padded
   when transmitted to the wire.  If we imagine a theoretical
   forwarding system with both queuing and transmission in fixed, MTU-
   sized units, it should clearly be treated as packet-congestible,
   because the queue length in packets would be a good model of
   congestion of the lower layer link.

   If we now imagine a hybrid forwarding system with transmission delay
   largely dependent on the byte-size of packets but buffers of one MTU
   per packet, it should strictly require a more complex algorithm to
   determine the probability of congestion.  It should be treated as two
   resources in sequence, where the sum of the byte-sizes of the packets
   within each packet buffer models congestion of the line while the
   length of the queue in packets models congestion of the queue.  Then
   the probability of congesting the forwarding buffer would be a
   conditional probability--conditional on the previously calculated
   probability of congesting the line.

   In systems that use fixed size buffers, it is unusual for all the
   buffers used by an interface to be the same size.  Typically pools of
   different sized buffers are provided (Cisco uses the term 'buffer
   carving' for the process of dividing up memory into these pools
   [IOSArch]).  Usually, if the pool of small buffers is exhausted,
   arriving small packets can borrow space in the pool of large buffers,
   but not vice versa.  However, it is easier to work out what should be
   done if we temporarily set aside the possibility of such borrowing.
   Then, with fixed pools of buffers for different sized packets and no
   borrowing, the size of each pool and the current queue length in each
   pool would both be measured in packets.  So an AQM algorithm would
   have to maintain the queue length for each pool, and judge whether to
   drop/mark a packet of a particular size by looking at the pool for
   packets of that size and using the length (in packets) of its queue.

   We now return to the issue we temporarily set aside: small packets
   borrowing space in larger buffers.  In this case, the only difference
   is that the pools for smaller packets have a maximum queue size that
   includes all the pools for larger packets.  And every time a packet
   takes a larger buffer, the current queue size has to be incremented
   for all queues in the pools of buffers less than or equal to the
   buffer size used.

   We will return to borrowing of fixed sized buffers when we discuss
   biasing the drop/marking probability of a specific packet because of
   its size in Section 3.2.1.  But here we can give a simple summary of
   the present discussion on how to measure the length of queues of
   fixed buffers: no matter how complicated the scheme is, ultimately
   any fixed buffer system will need to measure its queue length in
   packets not bytes.
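
   The bookkeeping described above can be sketched as follows
   (illustrative only; the pool sizes are invented and real buffer-
   carving implementations such as those described in [IOSArch] will
   differ):

      # Fixed buffer sizes of each pool in bytes, smallest first.
      POOL_SIZES = [128, 512, 1600]
      queue_len = {size: 0 for size in POOL_SIZES}  # in packets

      def enqueue(buffer_size_used):
          # buffer_size_used is the size of the fixed buffer the packet
          # actually occupies (a larger one if it borrowed).  The packet
          # counts against the queue of every pool of buffers less than
          # or equal to that size.
          for size in POOL_SIZES:
              if size <= buffer_size_used:
                  queue_len[size] += 1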

3.1.2.  Congestion Measurement without a Queue

   AQM algorithms are nearly always described assuming there is a queue
   for a congested resource and the algorithm can use the queue length
   to determine the probability that it will drop or mark each packet.
   But not all congested resources lead to queues.  For instance,
   wireless spectrum is bit-congestible (for a given coding scheme),
   because interference increases with the rate at which bits are
   transmitted.  But wireless link protocols do not always maintain a
   queue that depends on spectrum interference.  Similarly, power
   limited resources are also usually bit-congestible if energy is
   primarily required for transmission rather than header processing,
   but it is rare for a link protocol to build a queue as it approaches
   maximum power.

   Nonetheless, AQM algorithms do not require a queue in order to work.
   For instance spectrum congestion can be modelled by signal quality
   using target bit-energy-to-noise-density ratio.  And, to model radio
   power exhaustion, transmission power levels can be measured and
   compared to the maximum power available.  [ECNFixedWireless] proposes
   a practical and theoretically sound way to combine congestion
   notification for different bit-congestible resources at different
   layers along an end to end path, whether wireless or wired, and
   whether with or without queues.
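
   As a toy example of congestion notification without a queue (our own
   sketch with an entirely invented marking curve, not a proposal), a
   power-limited link could derive a marking probability by comparing
   its transmission power to the maximum available:

      def mark_probability(tx_power_w, max_power_w, threshold=0.8):
          # No queue to measure: use the fraction of the maximum
          # available transmission power currently in use as the
          # congestion measure, and start marking above a threshold.
          # The linear ramp is arbitrary, chosen only for illustration.
          utilisation = tx_power_w / max_power_w
          if utilisation <= threshold:
              return 0.0
          return min(1.0, (utilisation - threshold) / (1.0 - threshold))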

5.  Idealised Wire Protocol Coding

   We will start by inventing an idealised congestion notification
   protocol before discussing how to make it practical.  The idealised
   protocol is shown to be correct using examples in Appendix A.

3.2.  Congestion notification involves the congested resource coding a
   congestion notification signal into the packet stream and the
   transports decoding it. Coding: Status

3.2.1.  Network Bias when Encoding

   The idealised protocol uses two different
   (imaginary) fields in each datagram previously mentioned email [pktByteEmail] referred to signal congestion: one for
   byte congestion and one for packet congestion.

   We are not saying two ECN fields will be needed (and by
   [RFC2309] gave advice we are not
   saying now disagree with.  It said that somehow a resource should be able to drop a packet in one
   of two different ways so that
   probability should depend on the transport can distinguish which
   sort size of drop it was!).  These two congestion notification channels
   are just a conceptual device.  They allow us to defer having to
   decide whether to distinguish between byte and the packet congestion when being considered
   for drop if the network resource codes the signal or when the transport decodes
   it.

   However, although this idealised mechanism isn't intended for
   implementation, we do want to emphasise that we may need to find a
   way to implement it, because is bit-congestible, but not if it could become necessary to somehow
   distinguish between bit and packet congestion [RFC3714].  Currently a
   design goal of network processing equipment such as routers and
   firewalls is to keep packet processing uncongested even under worst
   case bit rates with minimum packet sizes.  Therefore, packet-
   congestion is currently rare,
   congestible, but there is no guarantee advised that it will
   not become common with future technology trends. most scarce resources in the Internet
   were currently bit-congestible.  The idealised wire protocol is given below.  It accounts for argument continued that if
   packet
   sizes at the transport layer, not in drops were inflated by packet size (byte-mode dropping), "a
   flow's fraction of the network, and packet drops is then only in
   the case a good indication of bit-congestible resources. that
   flow's fraction of the link bandwidth in bits per second".  This avoids was
   consistent with a referenced policing mechanism being worked on at
   the perverse
   incentive to send smaller packets time for detecting unusually high bandwidth flows, eventually
   published in 1999 [pBox].  [The problem could and should have been
   solved by making the DoS vulnerability that
   would otherwise result if policing mechanism count the network were volume of bytes
   randomly dropped, not the number of packets.]

   A few months before RFC2309 was published, an addendum was added to bias towards them (see
   the motivating argument about avoiding perverse incentives above archived email referenced from the RFC, in
   Section 2.2):

   1.  A packet-congestible resource trying which the final
   paragraph seemed to code congestion level p_p
       into partially retract what had previously been said.
   It clarified that the question of whether the probability of
   dropping/marking a packet stream should mark depend on its size was not related
   to whether the idealised `packet
       congestion' field resource itself was bit congestible, but a completely
   orthogonal question.  However the only example given had the queue
   measured in each packets but packet with probability p_p
       irrespective drop depended on the byte-size of the packet's size.  The transport should then
       take a
   packet with in question.  No example was given the packet congestion field marked to mean
       just one mark, irrespective other way round.

   In 2000, Cnodder et al [REDbyte] pointed out that there was an error
   in the part of the original 1993 RED algorithm that aimed to
   distribute drops uniformly, because it didn't correctly take into
   account the adjustment for packet size.  They recommended an
   algorithm called RED_4 to fix this.  But they also recommended a
   further change, RED_5, to adjust the drop rate dependent on the
   square of relative packet size.  This was indeed consistent with the
   one implied motivation behind RED's byte-mode drop--that we should
   reverse engineer the network to improve the performance of dominant
   end-to-end congestion control mechanisms.
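
   To make the different scalings concrete, the sketch below contrasts
   packet-mode drop with the linear (RED_4-style) and squared
   (RED_5-style) byte-mode adjustments of the drop decision.  It is our
   own illustration in Python, with invented parameter names; it is not
   code taken from any RED implementation.

      import random

      MAX_PKT_SIZE = 1500   # reference size for byte-mode scaling

      def drop_packet(p, pkt_size, mode="packet"):
          """Return True if the arriving packet should be dropped.

          p        -- drop probability RED derived from the queue length
          pkt_size -- size of the arriving packet in bytes
          mode     -- "packet": independent of size (packet-mode drop)
                      "linear": scaled by relative size (RED_4-style)
                      "square": scaled by relative size squared (RED_5)
          """
          if mode == "linear":
              p = p * pkt_size / MAX_PKT_SIZE
          elif mode == "square":
              p = p * (pkt_size / MAX_PKT_SIZE) ** 2
          return random.random() < p

      # A 60 B pure ACK is 25 times less likely to be dropped than a
      # 1500 B data packet under "linear" and 625 times less likely
      # under "square", but equally likely under "packet".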

   By 2003, a further change had been made to the adjustment for packet
   size, this time in the RED algorithm of the ns2 simulator.  Instead
   of taking each packet's size relative to a `maximum packet size' it
   was taken relative to a `mean packet size', intended to be a static
   value representative of the `typical' packet size on the link.  We
   have not been able to find a justification for this change in the
   literature; however, Eddy and Allman conducted experiments [REDbias]
   that assessed how sensitive RED was to this parameter, amongst other
   things.  No-one seems to have pointed out that this changed
   algorithm can often lead to drop probabilities of greater than 1
   [which should ring alarm bells hinting that there's a mistake in the
   theory somewhere].  On 10-Nov-2004, this variant of byte-mode packet
   drop was made the default in the ns2 simulator.
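
   The problem is easy to reproduce with a few lines of arithmetic.
   The sketch below is our own illustration of the scaling, not the ns2
   code; any packet larger than mean_pkt_size/p yields a nominal drop
   probability greater than 1.

      def ns2_style_byte_mode(p, pkt_size, mean_pkt_size=500):
          """Byte-mode drop scaled by a static 'mean packet size'."""
          return p * pkt_size / mean_pkt_size

      # With p = 0.4 and a 1500 B packet against a 500 B 'mean', the
      # nominal drop probability is 0.4 * 1500 / 500 = 1.2, i.e.
      # greater than 1.
      print(ns2_style_byte_mode(0.4, 1500))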

   The byte-mode drop variant of RED is, of course, not the only
   possible bias towards small packets in queueing algorithms.  We have
   already mentioned that tail-drop queues naturally tend to lock-out
   large packets once they are full.  But also queues with fixed sized
   buffers reduce the probability that small packets will be dropped if
   (and only if) they allow small packets to borrow buffers from the
   pools for larger packets.  As was explained in Section 3.1.1 on
   fixed size buffer carving, borrowing effectively makes the maximum
   queue size for small packets greater than that for large packets,
   because more buffers can be used by small packets while fewer will
   fit large packets.

   In itself, the bias towards small packets caused by buffer borrowing
   is perfectly correct.  Lower drop probability for small packets is
   legitimate in buffer borrowing schemes, because small packets
   genuinely congest the machine's buffer memory less than large
   packets, given they can fit in more spaces.  The bias towards small
   packets is not artificially added (as it is in RED's byte-mode drop
   algorithm); it merely reflects the reality of the way fixed buffer
   memory gets congested.  Incidentally, the bias towards small packets
   from buffer borrowing is nothing like as large as that of RED's
   byte-mode drop.

   Nonetheless, fixed-buffer memory with tail drop is still prone to
   lock-out large packets, purely because of the tail-drop aspect.  So
   a good AQM algorithm like RED with packet-mode drop should be used
   with fixed buffer memories where possible.  If RED is too
   complicated to implement with multiple fixed buffer pools, the
   minimum necessary to prevent large packet lock-out is to ensure
   smaller packets never use the last available buffer in any of the
   pools for larger packets.
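
   That minimal safeguard can be expressed as a simple admission rule.
   The sketch below is our own illustration of the rule in Python, not
   code from any product; the pool names and sizes are invented.

      # Two pools of fixed-size buffers: one carved for small packets,
      # one for large.  Small packets may borrow from the large pool,
      # but must leave its last free buffer alone, so large packets
      # can never be locked out.
      pools = {"small": {"free": 40, "buf_size": 128},
               "large": {"free": 10, "buf_size": 1536}}

      def admit(pkt_size):
          small = pkt_size <= pools["small"]["buf_size"]
          if small and pools["small"]["free"] > 0:
              pools["small"]["free"] -= 1
              return True
          reserve = 1 if small else 0   # borrowing spares one buffer
          if pools["large"]["free"] > reserve:
              pools["large"]["free"] -= 1
              return True
          return False                  # tail drop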

3.2.2.  Transport Bias when Decoding

   The above proposals to alter the network equipment to bias towards
   smaller packets have largely carried on outside the IETF process
   (unless one counts a reference in an informational RFC to an
   archived email!).  Whereas, within the IETF, there are many
   different proposals to alter transport protocols to achieve the same
   goals, i.e. either to make the flow bit-rate take account of packet
   size, or to protect control packets from loss.  This memo argues
   that altering transport protocols is the more principled approach.

   A recently approved experimental RFC adapts its transport layer
   protocol to take account of packet sizes relative to typical TCP
   packet sizes.  This proposes a new small-packet variant of TCP-
   friendly rate control [RFC3448] called TFRC-SP [RFC4828].
   Essentially, it proposes a rate equation that inflates the flow rate
   by the ratio of a typical TCP segment size (1500B including TCP
   header) over the actual segment size [PktSizeEquCC].  (There are
   also other important differences of detail relative to TFRC, such as
   using virtual packets [CCvarPktSize] to avoid responding to multiple
   losses per round trip and using a minimum inter-packet interval.)
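
   As a rough sketch of that rate inflation (our own simplification in
   Python; it ignores TFRC-SP's minimum inter-packet interval and its
   accounting for header overhead), the TCP throughput equation of
   [RFC3448] can be evaluated for the actual segment size and the
   result inflated by the ratio of a typical 1500 B segment to the
   actual size:

      from math import sqrt

      def tcp_rate(s, R, p, b=1):
          """TCP throughput equation of [RFC3448] in bytes per second,
          for segment size s (bytes), round-trip time R (seconds) and
          loss event rate p."""
          t_rto = 4 * R
          denom = (R * sqrt(2 * b * p / 3)
                   + t_rto * 3 * sqrt(3 * b * p / 8) * p
                   * (1 + 32 * p * p))
          return s / denom

      def tfrc_sp_rate(actual_seg, R, p, typical_seg=1500):
          """Inflate the rate by typical_seg/actual_seg.  Because the
          equation is linear in s, this equals the byte rate that a
          flow of typical-sized segments would have achieved."""
          return tcp_rate(actual_seg, R, p) * typical_seg / actual_seg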

   Section 4.5.1 of this TFRC-SP spec discusses the implications of
   operating in an environment where queues have been configured to
   drop smaller packets with proportionately lower probability than
   larger ones.  But it only discusses TCP operating in such an
   environment, only mentioning TFRC-SP briefly when discussing how to
   define fairness with TCP.  And it only discusses the byte-mode
   dropping version of RED as it was before Cnodder et al pointed out
   that it didn't sufficiently bias towards small packets to make TCP
   independent of packet size.

   So the TFRC-SP spec doesn't address the issue of which of the
   network or the transport _should_ handle fairness between different
   packet sizes.  In its Appendix B.4 it discusses the possibility of
   both TFRC-SP and some network buffers duplicating each other's
   attempts to deliberately bias towards small packets.  But the
   discussion is not conclusive, instead reporting simulations of many
   of the possibilities in order to assess performance but not
   recommending any particular course of action.

   The paper originally proposing TFRC with virtual packets (VP-TFRC)
   [CCvarPktSize] proposed that there should perhaps be two variants to
   cater for the different variants of RED.  However, as the TFRC-SP
   authors point out, there is no way for a transport to know whether
   some queues on its path have deployed RED with byte-mode packet drop
   (except if an exhaustive survey found that no-one has deployed
   it!--see Section 3.2.4).  Incidentally, VP-TFRC also proposed that
   byte-mode RED dropping should really square the packet size
   compensation factor (like that of RED_5, but apparently unaware of
   it).

   Pre-congestion notification [I-D.ietf-pcn] is a proposal to use a
   virtual queue for AQM marking for packets within one Diffserv class
   in order to give early warning prior to any real queuing.  The
   proposed PCN marking algorithms have been designed not to take
   account of packet size when forwarding through queues.  Instead, the
   general principle has been to take account of the sizes of marked
   packets when monitoring the fraction of marking at the edge of the
   network.
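
   For example, an edge monitor following this principle accumulates
   marked and total traffic in bytes rather than in packets.  The
   sketch below is our own illustration of that accounting, not part of
   any PCN specification.

      def marking_fraction(packets):
          """packets: iterable of (size_in_bytes, marked) pairs seen
          at the network edge.  The sizes of marked packets are taken
          into account, even though the queues marked without regard
          to size."""
          marked = sum(size for size, m in packets if m)
          total = sum(size for size, _ in packets)
          return marked / total if total else 0.0

      # Three 1500 B packets, one of them marked, plus five unmarked
      # 60 B ACKs: 1500 / 4800 = 0.3125
      print(marking_fraction([(1500, True), (1500, False),
                              (1500, False)] + [(60, False)] * 5))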

3.2.3.  Making Transports Robust against Control Packet Losses

   Recently, two RFCs have not been able defined changes to find a justification for this change in the
   literature, however Eddy and Allman conducted experiments [REDbias] TCP that assessed how sensitive RED was to this parameter, amongst other
   things.  No-one seems to have pointed out that this changed algorithm
   can often lead to drop probabilities of greater than 1 [which should
   ring alarm bells hinting that there's a mistake in the theory
   somewhere].  On 10-Nov-2004, this variant of byte-mode packet drop
   was made the default in the ns2 simulator.

   The byte-mode drop variant of RED is, of course, not the only
   possible bias towards make it more
   robust against losing small control packets in queueing algorithms.  We have
   already mentioned that tail-drop queues naturally tend to lock-out
   large packets once [RFC5562] [RFC5690].  In
   both cases they are full.  But also queues with fixed sized
   buffers reduce the probability note that small packets will the case for these TCP changes would be dropped
   weaker if
   (and only if) they allow RED were biased against dropping small packets to borrow buffers from the
   pools for larger packets.  As was explained in Section 4.1.1 on fixed
   size buffer carving, borrowing effectively makes the maximum queue
   size for small packets greater than  We argue
   here that for large packets, because these two proposals are a safer and more buffers can principled way to
   achieve TCP performance improvements than reverse engineering RED to
   benefit TCP.

   Although no proposals exist as far as we know, it would also be
   possible and perfectly valid to make control packets robust against
   drop by explicitly requesting a lower drop probability, using their
   Diffserv code point [RFC2474] to request a scheduling class with
   lower drop.

   The re-ECN protocol proposal [I-D.briscoe-tsvwg-re-ecn-tcp] is
   designed so that transports can be made more robust against losing
   control packets.  It gives queues an incentive to optionally give
   preference against drop to packets with the 'feedback not
   established' codepoint in the proposed 'extended ECN' field.
   Senders have incentives to use this codepoint sparingly, but they
   can use it on control packets to reduce their chance of being
   dropped.  For instance, the proposed modification to TCP for re-ECN
   uses this codepoint on the SYN and SYN-ACK.

   Although not brought to the IETF, a simple proposal from Wischik
   [DupTCP] suggests that the first three packets of every TCP flow
   should be routinely duplicated after a short delay.  It shows that
   this would greatly improve the chances of short flows completing
   quickly, but it would hardly increase traffic levels on the
   Internet, because Internet bytes have always been concentrated in
   the large flows.  It further shows that the performance of many
   typical applications depends on the completion of long serial chains
   of short messages.  It argues that, given most of the value people
   get from the Internet is concentrated within short flows, this
   simple expedient would greatly increase the value of the best
   efforts Internet at minimal cost.

3.2.4.  Congestion Coding: Summary of Status

   +-----------+----------------+-----------------+--------------------+
   | transport |  RED_1 (packet |  RED_4 (linear  | RED_5 (square byte |
   |        cc |   mode drop)   | byte mode drop) |     mode drop)     |
   +-----------+----------------+-----------------+--------------------+
   |    TCP or |    s/sqrt(p)   |    sqrt(s/p)    |      1/sqrt(p)     |
   |      TFRC |                |                 |                    |
   |   TFRC-SP |    1/sqrt(p)   |    1/sqrt(sp)   |    1/(s.sqrt(p))   |
   +-----------+----------------+-----------------+--------------------+

     Table 1: Dependence of flow bit-rate per RTT on packet size s and
    drop rate p when network and/or transport bias towards small packets
                            to varying degrees

   Table 1 aims to summarise the positions we may now be in.  Each
   column shows a different possible AQM behaviour in different queues
   in the network, using the terminology outlined earlier (RED_1 is
   basic RED with packet-mode drop).  Each row shows a different
   transport behaviour: TCP [RFC5681] and TFRC [RFC3448] on the top row
   with TFRC-SP [RFC4828] below.  Suppressing all inessential details,
   the table shows that independence from packet size should either be
   achievable by not altering the TCP transport in a RED_5 network, or
   by using the small packet TFRC-SP transport in a network without any
   byte-mode dropping RED (top right and bottom left).  Top left is the
   `do nothing' scenario, while bottom right is the `do-both' scenario
   in which bit-rate would become far too biased towards small packets.
   Of course, if any form of byte-mode dropping RED has been deployed
   on a selection of congested queues, each path will present a
   different hybrid scenario to its transport.
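
   To see what the table implies numerically, the short sketch below
   (our own illustration; the expressions give relative bit-rates in
   arbitrary units, not absolute rates) evaluates the TCP/TFRC row for
   two packet sizes at the same drop rate.  Under RED_1 the bit-rates
   differ in proportion to packet size, whereas under RED_5 they are
   equal.

      from math import sqrt

      p = 0.01               # drop rate seen by both flows
      for s in (1500, 60):   # packet sizes in bytes
          red1 = s / sqrt(p)         # packet-mode drop
          red4 = sqrt(s / p)         # linear byte-mode drop
          red5 = 1 / sqrt(p)         # square byte-mode drop
          print(s, red1, red4, red5)

      # Relative bit-rate of the 1500 B flow over the 60 B flow:
      #   RED_1: 25x      RED_4: 5x      RED_5: 1x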

   Whatever, we can see that the linear byte-mode drop column in the
   middle considerably complicates the Internet.  It's a half-way house
   that doesn't bias enough towards small packets even if one believes
   the network should be doing the biasing.  We argue below that _all_
   network layer bias towards small packets should be turned off--if
   indeed any equipment vendors have implemented it--leaving packet
   size bias solely as the preserve of the transport layer (solely the
   leftmost, packet-mode drop column).

   A survey has been to take account conducted of 84 vendors to assess how widely drop
   probability based on packet size has been implemented in RED.  Prior
   to the sizes
   of marked packets when monitoring survey, an individual approach to Cisco received confirmation
   that, having checked the fraction code-base for each of marking at the edge product ranges,
   Cisco has not implemented any discrimination based on packet size in
   any AQM algorithm in any of the network.

6.2.3.  Making Transports Robust against Control Packet Losses

   Recently, two drafts have proposed changes its products.  Also an individual
   approach to TCP Alcatel-Lucent drew a confirmation that make it more
   robust against losing small control packets [I-D.ietf-tcpm-ecnsyn]
   [I-D.floyd-tcpm-ackcc].  In both cases they note was very
   likely that the case for
   these TCP changes would be weaker if none of their products contained RED were biased against dropping
   small packets.  We argue here code that these two proposals are a safer
   and more principled way to achieve TCP performance improvements than
   reverse engineering RED
   implemented any packet-size bias.

   Turning to our more formal survey (Table 2), about 19% of those
   surveyed have replied so far, giving a sample size of 16.  Although
   we do not have permission to identify the respondents, we can say
   that those that have responded include most of the larger vendors,
   covering a large fraction of the market.  They range across the
   large network equipment vendors at L3 & L2, firewall vendors,
   wireless equipment vendors, as well as large software businesses
   with a small selection of networking products.  So far, all those
   who have responded have confirmed that they have not implemented the
   variant of RED with drop dependent on packet size (2 were fairly
   sure they had not but needed to check more thoroughly).

   +-------------------------------+----------------+-----------------+
   |                      Response | No. of vendors | %age of vendors |
   +-------------------------------+----------------+-----------------+
   |               Not implemented |             14 |             17% |
   |    Not implemented (probably) |              2 |              2% |
   |                   Implemented |              0 |              0% |
   |                   No response |             68 |             81% |
   | Total companies/orgs surveyed |             84 |            100% |
   +-------------------------------+----------------+-----------------+

    Table 2: Vendor Survey on byte-mode drop variant of RED (lower drop
                      probability for small packets)

   Where reasons have been given, the extra complexity of packet bias
   code has been most prevalent, though one vendor had a more
   principled reason for avoiding it--similar to the argument of this
   document.  We have established that Linux does not implement RED
   with packet size drop bias, although we have not investigated a
   wider range of open source code.

   Finally, we repeat that RED's byte-mode drop is not the only way to
   bias towards small packets--tail-drop tends to lock-out large
   packets very effectively.  Our survey was of vendor implementations,
   so we cannot be certain about operator deployment.  But we believe
   many queues in the Internet are still tail-drop.  The company of one
   of the co-authors (BT) has widely deployed RED, but there are bound
   to be many tail-drop queues, particularly in access network
   equipment and on middleboxes like firewalls, where RED is not always
   available.

   Routers using a memory architecture based on fixed size buffers with
   borrowing may also still be prevalent in the Internet.  As explained
   in Section 3.2.1, these also provide a marginal (but legitimate)
   bias towards small packets.  So even though RED byte-mode drop is
   not prevalent, it is likely there is still some bias towards small
   packets in the Internet due to tail drop and fixed buffer borrowing.

4.  Outstanding Issues and Next Steps

4.1.  Bit-congestible World

   For a connectionless network with nearly all resources being bit-
   congestible we believe the recommended position is now unarguably
   clear--that the network should not make allowance for packet sizes
   and the transport should.  This leaves two outstanding issues:

   o  How to handle any legacy of AQM with byte-mode drop already
      deployed;

   o  The need to start a programme to update transport congestion
      control protocol standards to take account of packet size.

   The sample of returns from our vendor survey (Section 3.2.4)
   suggests that byte-mode packet drop seems not to be implemented at
   all, let alone deployed, or if it is, it is likely to be very
   sparse.  Therefore, we do not really need a migration strategy from
   all but nothing to nothing.

   A programme of standards updates to take account of packet size in
   transport congestion control protocols has started with TFRC-SP
   [RFC4828], while weighted TCPs implemented in the research community
   [WindowPropFair] could form the basis of a future change to TCP
   congestion control [RFC5681] itself.

4.2.  Bit- & Packet-congestible World

   Nonetheless, a connectionless network with both bit-congestible and
   packet-congestible resources is a different matter.  If we believe
   we should allow for this possibility in the future, this space
   contains a truly open research issue.

   We develop the concept of an idealised congestion notification
   protocol that supports both bit-congestible and packet-congestible
   resources in Appendix B.  The congestion notification requires at
   least two flags for congestion of bit-congestible and packet-
   congestible resources.  This hides a fundamental problem--much more
   fundamental than whether we can magically create header space for
   yet another ECN flag in IPv4, or whether it would work while being
   deployed incrementally.  A congestion notification protocol must
   survive a transition from low levels of congestion to high.  Marking
   two states is feasible with explicit marking, but much harder if
   packets are dropped.  Also, it will not always be cost-effective to
   implement AQM at every low level resource, so drop will often have
   to suffice.  Distinguishing drop from delivery naturally provides
   just one congestion flag--it is hard to drop a packet in two ways
   that are distinguishable remotely.  This is a similar problem to
   that of distinguishing wireless transmission losses from congestive
   losses.

   We should also note that, strictly, packet-congestible resources are
   actually cycle-congestible, because load also depends on the
   complexity of each look-up and whether the pattern of arrivals is
   amenable to caching or not.  Further, this reminds us that any
   solution must not require a forwarding engine to use excessive
   processor cycles in order to decide how to say it has no spare
   processor cycles.

   Recently, the dual resource queue (DRQ) proposal [DRQ] has been made
   on the premise that, as network processors become more cost
   effective, per packet operations will become more complex
   (irrespective of whether more function in the network layer is
   desirable).  Consequently the premise is that CPU congestion will
   become more common.  DRQ is a proposed modification to the RED
   algorithm that folds both bit congestion and packet congestion into
   one signal (either loss or ECN).

   The problem of signalling packet processing congestion is not
   pressing, as most Internet resources are designed to be bit-
   congestible before packet processing starts to congest (see
   Section 1.1).  However, the IRTF Internet congestion control
   research group (ICCRG) has set itself the task of reaching consensus
   on generic forwarding mechanisms that are necessary and sufficient
   to support the Internet's future congestion control requirements
   (the first challenge in [I-D.irtf-iccrg-welzl]).  Therefore, rather
   than not giving this problem any thought at all, just because it is
   hard and currently hypothetical, we defer the question of whether
   packet congestion might become common and what to do if it does to
   the IRTF (the 'Small Packets' challenge in [I-D.irtf-iccrg-welzl]).

5.  Recommendation and Conclusions

5.1.  Recommendation on Queue Measurement

   Queue length is usually the most correct and simplest way to measure
   congestion of a resource.  To avoid the pathological effects of drop
   tail, an AQM function can then be used to transform queue length
   into the probability of dropping or marking a packet (e.g.  RED's
   piecewise linear function between thresholds).

   If the resource is bit-congestible, the length of the queue SHOULD
   be measured in bytes.  If the resource is packet-congestible, the
   length of the queue SHOULD be measured in packets.  No other choice
   makes sense, because the number of packets waiting in the queue
   isn't relevant if the resource gets congested by bytes, and vice
   versa.  We discuss the implications on RED's byte mode and packet
   mode for measuring queue length in Section 3.
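
   A minimal sketch of this combination for a bit-congestible resource
   follows (our own illustration in Python, not a complete RED
   implementation--there is no averaging of the queue length, no gentle
   mode and no count of packets since the last drop): the queue is
   measured in bytes, and the piecewise-linear function converts queue
   length into a drop/mark probability that is then applied to every
   packet irrespective of its size.

      import random

      MIN_TH = 15000   # bytes
      MAX_TH = 45000   # bytes
      MAX_P = 0.1

      def mark_probability(qlen_bytes):
          """RED's piecewise-linear function between the thresholds."""
          if qlen_bytes < MIN_TH:
              return 0.0
          if qlen_bytes >= MAX_TH:
              return 1.0
          return MAX_P * (qlen_bytes - MIN_TH) / (MAX_TH - MIN_TH)

      def on_arrival(queue, pkt_size):
          """Packet-mode drop: the decision ignores pkt_size itself."""
          p = mark_probability(sum(queue))   # queue measured in bytes
          if random.random() < p:
              return False                   # drop or ECN-mark
          queue.append(pkt_size)
          return True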

   NOTE WELL that RED's byte-mode queue measurement is fine, being
   completely orthogonal to byte-mode drop.  If a RED implementation
   has a byte-mode but does not specify what sort of byte-mode, it is
   most probably byte-mode queue measurement, which is fine.  However,
   if in doubt, the vendor should be consulted.

5.2.  Recommendation on Notifying Congestion

   The strong recommendation is that AQM algorithms such as RED SHOULD
   NOT use byte-mode drop.  More generally, the Internet's congestion
   notification protocols (drop, ECN & PCN) SHOULD take account of
   packet size when the notification is read by the transport layer,
   NOT when it is written by the network layer.  This approach offers
   sufficient and correct congestion information for all known and
   future transport protocols and also ensures no perverse incentives
   are created that would encourage transports to use inappropriately
   small packet sizes.

   The alternative of deflating RED's drop probability for smaller
   packet sizes (byte-mode drop) has no enduring advantages.  It is
   more complex, it creates the perverse incentive to fragment segments
   into tiny pieces and it reopens the vulnerability to floods of small
   packets that drop-tail queues suffered from and that AQM was
   designed to remove.

   Byte-mode drop is a change to the network layer that makes allowance
   for an omission from the design of TCP, effectively reverse
   engineering the network layer to contrive to make two TCPs with
   different packet sizes run at equal bit rates (rather than packet
   rates) under the same path conditions.  It also improves TCP
   performance by reducing the chance that a SYN or a pure ACK will be
   dropped, because they are small.  But we SHOULD NOT hack the network
   layer to improve or fix certain transport protocols.  No matter how
   predominant a transport protocol is (even if it's TCP), trying to
   correct for its failings by biasing towards small packets in the
   network layer creates a perverse incentive to break down all flows
   from all transports into tiny segments.

   So far, our survey of 84 vendors across the industry has drawn
   responses from about 19%, none of whom have implemented the byte-
   mode packet drop variant of RED.  Given there appears to be little,
   if any, installed base, it seems we can recommend removal of byte-
   mode drop from RED with little, if any, incremental deployment
   impact.

   If a vendor has implemented byte-mode drop, and an operator has
   turned it on, it is strongly RECOMMENDED that it SHOULD be turned
   off.  Note that RED as a whole SHOULD NOT be turned off, as without
   it, a drop tail queue also biases against large packets.  But note
   also that turning off byte-mode drop may alter the relative
   performance of applications using different packet sizes, so it
   would be advisable to establish the implications before turning it
   off.

5.3.  Recommendation on Responding to Congestion

   Instead of network equipment biasing its congestion notification for
   small packets, the IETF transport area should continue its programme
   of updating congestion control protocols to take account of packet
   size and to make transports less sensitive to losing control packets
   like SYNs and pure ACKs.

5.4.  Recommended Future Research

   The alternative of deflating RED's drop probability above conclusions cater for smaller
   packet sizes (byte-mode drop) has no enduring advantages.  It the Internet as it is today with
   most, if not all, resources being primarily bit-congestible.  A
   secondary conclusion of this memo is that we may see more
   complex, packet-
   congestible resources in the future, so research may be needed to
   extend the Internet's congestion notification (drop or ECN) so that
   it can handle a mix of bit-congestible and packet-congestible
   resources.

6.  Security Considerations

   This draft recommends that queues do not bias drop probability
   towards small packets, as this creates a perverse incentive for
   transports to break down their flows into tiny segments.  One of the
   benefits of implementing AQM was meant to be to remove this perverse
   incentive that drop-tail queues gave to small packets.  Of course,
   if transports really want to make the greatest gains, they don't
   have to respond to congestion anyway.  But we don't want
   applications that are trying to behave to discover that they can go
   faster by using smaller packets.

   In practice, transports cannot all be trusted to respond to
   congestion.  So another reason for recommending that queues do not
   bias drop probability towards small packets is to avoid the
   vulnerability to small packet DDoS attacks that would otherwise
   result.  One of the benefits of implementing AQM was meant to be to
   remove drop-tail's DoS vulnerability to small packets, so we
   shouldn't add it back again.

   If most queues implemented AQM with byte-mode drop, the resulting
   network would amplify the potency of a small packet DDoS attack.  At
   the first queue the stream of packets would push aside a greater
   proportion of large packets, so more of the small packets would
   survive to attack the next queue.  Thus a flood of small packets
   would continue on towards the destination, pushing regular traffic
   with large packets out of the way in one queue after the next, but
   suffering much less drop itself.

   Appendix C explains why the ability of networks to police the
   response of _any_ transport to congestion depends on bit-congestible
   network resources only doing packet-mode drop, not byte-mode drop.
   In summary, it says that making drop probability depend on the size
   of the packets that bits happen to be divided into simply encourages
   the bits to be divided into smaller packets.  Byte-mode drop would
   therefore irreversibly complicate any attempt to fix the Internet's
   incentive structures.

7.  Acknowledgements

   Thank you to Sally Floyd, who gave extensive and useful review
   comments.  Also thanks for the reviews from Philip Eardley, Toby
   Moncaster and Arnaud Jacquet as well as helpful explanations of
   different hardware approaches from Larry Dunn and Fred Baker.  I am
   grateful to Bruce Davie and his colleagues for providing a timely and
   efficient survey of RED implementation in Cisco's product range.
   Also grateful thanks to Toby Moncaster, Will Dormann, John Regnault,
   Simon Carter and Stefaan De Cnodder who further helped survey the
   current status of RED implementation and deployment and, finally,
   thanks to the anonymous individuals who responded.

   Bob Briscoe and Jukka Manner are partly funded by Trilogy, a
   research project (ICT-216372) supported by the European Community
   under its Seventh Framework Programme.  The views expressed here are
   those of the authors only.

8.  Comments Solicited

   Comments and questions are encouraged and very welcome.  They can be
   addressed to the IETF Transport Area working group mailing list
   <tsvwg@ietf.org>, and/or to the authors.

9.  References

9.1.  Normative References

   [RFC2119]                       Bradner, S., "Key words for use in
                                   RFCs to Indicate Requirement Levels",
                                   BCP 14, RFC 2119, March 1997.

   [RFC2309]                       Braden, B., Clark, D., Crowcroft, J.,
                                   Davie, B., Deering, S., Estrin, D.,
                                   Floyd, S., Jacobson, V., Minshall,
                                   G., Partridge, C., Peterson, L.,
                                   Ramakrishnan, K., Shenker, S.,
                                   Wroclawski, J., and L. Zhang,
                                   "Recommendations on Queue Management
                                   and Congestion Avoidance in the
                                   Internet", RFC 2309, April 1998.

   [RFC3168]                       Ramakrishnan, K., Floyd, S., and D.
                                   Black, "The Addition of Explicit
                                   Congestion Notification (ECN) to IP",
                                   RFC 3168, September 2001.

   [RFC3426]                       Floyd, S., "General Architectural and
                                   Policy Considerations", RFC 3426,
                                   November 2002.

   [RFC5033]                       Floyd, S. and M. Allman, "Specifying
                                   New Congestion Control Algorithms",
                                   BCP 133, RFC 5033, August 2007.

9.2.  Informative References

   [CCvarPktSize]                  Widmer, J., Boutremans, C., and J-Y.
                                   Le Boudec, "Congestion Control for
                                   Flows with Variable Packet Size", ACM
                                   CCR 34(2) 137--151, 2004,
                                    <http://doi.acm.org/10.1145/
                                    997150.997162>.

   [DRQ]                           Shin, M., Chong, S., and I. Rhee,
                                   "Dual-Resource TCP/AQM for
                                   Processing-Constrained Networks",
                                   IEEE/ACM Transactions on
                                   Networking Vol 16, issue 2,
                                   April 2008,
                                    <http://dx.doi.org/10.1109/
                                    TNET.2007.900415>.

   [DupTCP]                        Wischik, D., "Short messages", Royal
                                   Society workshop on networks:
                                   modelling and control ,
                                    September 2007, <http://
                                    www.cs.ucl.ac.uk/staff/ucacdjw/
                                    Research/shortmsg.html>.

   [ECNFixedWireless]              Siris, V., "Resource Control for
                                   Elastic Traffic in CDMA Networks",
                                   Proc. ACM MOBICOM'02 ,
                                   September 2002, <http://
                                   www.ics.forth.gr/netlab/publications/
                                   resource_control_elastic_cdma.html>.

   [Evol_cc]                       Gibbens, R. and F. Kelly, "Resource
                                   pricing and the evolution of
                                   congestion control",
                                    Automatica 35(12)1969--1985,
                                    December 1999, <http://
                                    www.statslab.cam.ac.uk/~frank/
                                    evol.html>.

   [I-D.briscoe-tsvwg-re-ecn-tcp]  Briscoe, B., Jacquet, A., Moncaster,
                                   T., and A. Smith, "Re-ECN: Adding
                                   Accountability for Causing Congestion
                                   to TCP/IP", draft-briscoe-tsvwg-re-ecn-tcp-07 (work in
              progress), March 2009.

   [I-D.floyd-tcpm-ackcc]
              Floyd, S., "Adding Acknowledgement Congestion Control to
              TCP", draft-floyd-tcpm-ackcc-06
                                   draft-briscoe-tsvwg-re-ecn-tcp-08
                                   (work in progress),
              July September 2009.

   [I-D.ietf-pcn]                  Eardley, P., "Metering and marking
                                    behaviour of PCN-nodes",
                                    draft-ietf-pcn-marking-behaviour-05
                                    (work in progress), August 2009.

   [I-D.irtf-iccrg-welzl]          Welzl, M., Scharf, M., Briscoe, B.,
                                    and D. Papadimitriou, "Open Research
                                    Issues in Internet Congestion
                                    Control", draft-irtf-iccrg-welzl-
                                    congestion-control-open-research-07
                                    (work in progress), June 2010.

   [IOSArch]                       Bollapragada, V., White, R., and C.
                                   Murphy, "Inside Cisco IOS Software
                                   Architecture", Cisco Press: CCIE
                                    Professional Development ISBN13:
                                    978-1-57870-181-0, July 2000.

   [MulTCP]                        Crowcroft, J. and Ph. Oechslin,
                                   "Differentiated End to End Internet
                                   Services using a Weighted
                                   Proportional Fair Sharing TCP",
                                    CCR 28(3) 53--69, July 1998, <http://
                                    www.cs.ucl.ac.uk/staff/J.Crowcroft/
                                    hipparch/pricing.html>.

   [PktSizeEquCC]                  Vasallo, P., "Variable Packet Size
                                   Equation-Based Congestion Control",
                                   ICSI Technical Report tr-00-008,
                                    2000, <http://http.icsi.berkeley.edu/
                                    ftp/global/pub/techreports/2000/
                                    tr-00-008.pdf>.

   [RED93]                         Floyd, S. and V. Jacobson, "Random
                                   Early Detection (RED) gateways for
                                   Congestion Avoidance", IEEE/ACM
                                    Transactions on Networking 1(4)
                                    397--413, August 1993, <http://
                                    www.icir.org/floyd/papers/red/
                                    red.html>.

   [REDbias]                       Eddy, W. and M. Allman, "A Comparison
                                   of RED's Byte and Packet Modes",
                                   Computer Networks 42(3) 261--280,
                                   June 2003,
                                    <http://www.ir.bbn.com/
                                    documents/articles/redbias.ps>.

   [REDbyte]                       De Cnodder, S., Elloumi, O., and K.
                                   Pauwels, "RED behavior with different
                                   packet sizes", Proc. 5th IEEE
                                   Symposium on Computers and
                                   Communications (ISCC) 793--799,
                                   July 2000,
                                    <http://www.icir.org/
                                    floyd/red/Elloumi99.pdf>.

   [RFC2474]                       Nichols, K., Blake, S., Baker, F.,
                                   and D. Black, "Definition of the
                                   Differentiated Services Field (DS
                                   Field) in the IPv4 and IPv6 Headers",
                                   RFC 2474, December 1998.

   [RFC3448]                       Handley, M., Floyd, S., Padhye, J.,
                                   and J. Widmer, "TCP Friendly Rate
                                   Control (TFRC): Protocol
                                   Specification", RFC 3448,
                                   January 2003.

   [RFC3714]                       Floyd, S. and J. Kempf, "IAB Concerns
                                   Regarding Congestion Control for
                                   Voice Traffic in the Internet",
                                   RFC 3714, March 2004.

   [RFC4782]                       Floyd, S., Allman, M., Jain, A., and
                                   P. Sarolahti, "Quick-
              Start "Quick-Start for TCP
                                   and IP", RFC 4782, January 2007.

   [RFC4828]                       Floyd, S. and E. Kohler, "TCP
                                   Friendly Rate Control (TFRC): The
                                   Small-Packet (SP) Variant", RFC 4828,
                                   April 2007.

   [RFC5562]                       Kuzmanovic, A., Mondal, A., Floyd,
                                   S., and K. Ramakrishnan, "Adding
                                   Explicit Congestion Notification
                                   (ECN) Capability to TCP's SYN/ACK
                                   Packets", RFC 5562, June 2009.

   [RFC5670]                       Eardley, P., "Metering and Marking
                                   Behaviour of PCN-Nodes", RFC 5670,
                                   November 2009.

   [RFC5681]                       Allman, M., Paxson, V., and E.
                                   Blanton, "TCP Congestion Control",
                                   RFC 5681, September 2009.

   [RFC5690]                       Floyd, S., Arcia, A., Ros, D., and J.
                                   Iyengar, "Adding Acknowledgement
                                   Congestion Control to TCP", RFC 5690,
                                   February 2010.

   [Rate_fair_Dis]                 Briscoe, B., "Flow Rate Fairness:
                                   Dismantling a Religion",
              ACM CCR 37(2)63--74, April 2007,
              <http://portal.acm.org/citation.cfm?id=1232926>.

   [WindowPropFair]
              Siris, V., "Service Differentiation Religion", ACM
                                   CCR 37(2)63--74, April 2007, <http://
                                   portal.acm.org/
                                   citation.cfm?id=1232926>.

   [WindowPropFair]                Siris, V., "Service Differentiation
                                   and Performance of Weighted Window-
                                   Based Congestion Control and Packet
                                   Marking Algorithms in ECN Networks",
                                   Computer Communications 26(4) 314--
                                   326, 2002, <http://www.ics.forth.gr/
                                   netgroup/publications/
                                   weighted_window_control.html>.

   [gentle_RED]                    Floyd, S., "Recommendation on using
                                   the "gentle_" variant of RED", Web
                                   page , March 2000, <http://
                                   www.icir.org/floyd/red/gentle.html>.

   [pBox]                          Floyd, S. and K. Fall, "Promoting the
                                   Use of End-to-End Congestion Control
                                   in the Internet", IEEE/ACM
                                   Transactions on Networking 7(4) 458--
                                   472, August 1999, <http://
                                   www.aciri.org/floyd/
                                   end2end-paper.html>.

   [pktByteEmail]                  Floyd, S., "RED: Discussions of
                                    Byte and Packet Modes", email,
                                    March 1997, <http://
                                    www-nrg.ee.lbl.gov/floyd/
                                    REDaveraging.txt>.

   [xcp-spec]                      Falk, A., "Specification for the
                                   Explicit Control Protocol (XCP)",
                                   draft-falk-xcp-spec-03 (work in
                                   progress), July 2007.

Appendix A.  Congestion Notification Definition: Further Justification

   In Section 1.1 on the definition of congestion notification, load not
   capacity was used as the denominator.  This also has a subtle
   significance in the related debate over the design of new transport
   protocols--typical new protocol designs (e.g. in XCP [xcp-spec] &
   Quickstart [RFC4782]) expect the sending transport to communicate its
   desired flow rate to the network and network elements to
   progressively subtract from this so that the achievable flow rate
   emerges at the receiving transport.

   Congestion notification with total load in the denominator can serve
   a similar purpose (though in retrospect not in advance like XCP &
   QuickStart).  Congestion notification is a dimensionless fraction but
   each source can extract necessary rate information from it because it
   already knows what its own rate is.  Even though congestion
   notification doesn't communicate a rate explicitly, from each
   source's point of view congestion notification represents the
   fraction of the rate it was sending a round trip ago that couldn't
   (or wouldn't) be served by available resources.
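
   As a minimal illustration only (the numbers below are invented for
   the example, not taken from the memo), the following Python fragment
   shows how a source can turn the dimensionless congestion fraction
   back into a rate, simply because it knows its own sending rate:

      # Hypothetical figures: a source that sent at rate x one round
      # trip ago now sees a fraction p of that traffic marked/dropped.
      x = 2_000_000        # own sending rate a RTT ago [bit/s]
      p = 0.002            # congestion notification fraction seen

      # The part of this source's rate that could not (or would not)
      # be served by the congested resource, about 4000 bit/s here.
      excess = p * x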

Appendix B.  Idealised Wire Protocol

   We will start by inventing an idealised congestion notification
   protocol before discussing how to make it practical.  The idealised
   protocol is shown to be correct using examples later in this
   appendix.

B.1.  Protocol Coding

   Congestion notification involves the congested resource coding a
   congestion notification signal into the packet stream and the
   transports decoding it.  The idealised protocol uses two different
   (imaginary) fields in each datagram to signal congestion: one for
   byte congestion and one for packet congestion.

   We are not saying two ECN fields will be needed (and we are not
   saying that somehow a resource should be able to drop a packet in one
   of two different ways so that the transport can distinguish which
   sort of drop it was!).  These two congestion notification channels
   are just a conceptual device.  They allow us to defer having to
   decide whether to distinguish between byte and packet congestion when
   the network resource codes the signal or when the transport decodes
   it.

   However, although this idealised mechanism isn't intended for
   implementation, we do want to emphasise that we may need to find a
   way to implement it, because it could become necessary to somehow
   distinguish between bit and packet congestion [RFC3714].  Currently,
   packet-congestion is not the common case, but there is no guarantee
   that it will not become common with future technology trends.

   The idealised wire protocol is given below.  It accounts for packet
   sizes at the transport layer, not in the network, and then only in
   the case of bit-congestible resources.  This avoids the perverse
   incentive to send smaller packets and the DoS vulnerability that
   would otherwise result if the network were to bias towards them (see
   the motivating argument about avoiding perverse incentives in
   Section 2.2):

   1.  A packet-congestible resource trying to code congestion level p_p
       into a packet stream should mark the idealised `packet
       congestion' field in each packet with probability p_p
       irrespective of the packet's size.  The transport should then
       take a packet with the packet congestion field marked to mean
       just one mark, irrespective of the packet size.

   2.  A bit-congestible resource trying to code time-varying byte-
       congestion level p_b into a packet stream should mark the `byte
       congestion' field in each packet with probability p_b, again
       irrespective of the packet's size.  Unlike before, the transport
       should take a packet with the byte congestion field marked to
       count as a mark on each byte in the packet.
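
   The two rules above can be illustrated with a short Python sketch.
   This is purely illustrative and not a protocol specification; the
   field names, the dictionary representation of a packet and the
   pseudorandom marking are devices invented for the example:

      import random

      def network_marks(packet, p_b, p_p):
          # Idealised network behaviour: each field is marked with a
          # probability that ignores the packet's size.
          packet['byte_congestion'] = (random.random() < p_b)
          packet['pkt_congestion'] = (random.random() < p_p)
          return packet

      def transport_counts(packets):
          # Idealised transport behaviour: a pkt-congestion mark counts
          # as one mark; a byte-congestion mark counts as a mark on
          # every byte in the packet.
          pkt_marks = sum(1 for p in packets if p['pkt_congestion'])
          byte_marks = sum(p['size'] for p in packets
                           if p['byte_congestion'])
          return pkt_marks, byte_marks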

   The worked examples in Appendix B.2 show that transports can extract
   sufficient and correct congestion notification from these protocols
   for cases when two flows with different packet sizes have matching
   bit rates or matching packet rates.  Examples are also given that mix
   these two flows into one to show that a flow with mixed packet sizes
   would still be able to extract sufficient and correct information.

   Sufficient and correct congestion information means that there is
   sufficient information for the two different types of transport
   requirements:

   Ratio-based:  Established transport congestion controls like TCP's
      [RFC5681] aim to achieve equal segment rates per RTT through the
      same bottleneck--TCP friendliness [RFC3448].  They work with the
      ratio of dropped to delivered segments (or marked to unmarked
      segments in the case of ECN).  The example scenarios show that
      these ratio-based transports are effectively the same whether
      counting in bytes or packets, because the units cancel out.
      (Incidentally, this is why TCP's bit rate is still proportional to
      packet size even when byte-counting is used, as recommended for
      TCP in [RFC5681], mainly for orthogonal security reasons.)

   Absolute-target-based:  Other congestion controls proposed in the
      research community aim to limit the volume of congestion caused to
      a constant weight parameter.  [MulTCP][WindowPropFair] are
      examples of weighted proportionally fair transports designed for
      cost-fair environments [Rate_fair_Dis].  In this case, the
      transport requires a count (not a ratio) of dropped/marked bytes
      in the bit-congestible case and of dropped/marked packets in the
      packet congestible case.
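
   The distinction can be made concrete with a short Python sketch
   (illustrative only; the function names are invented here and packets
   are assumed to be represented as in the earlier sketch, i.e. a dict
   with a 'size' and the two idealised congestion fields).  A ratio-
   based transport divides marked by total, so the unit counted cancels
   out; an absolute-target-based transport keeps the count itself:

      def ratio_based_estimate(packets):
          # Dimensionless fraction of marked bytes; in the scenarios of
          # Appendix B.2, counting packets gives the same estimate.
          total = sum(p['size'] for p in packets)
          marked = sum(p['size'] for p in packets
                       if p['byte_congestion'])
          return marked / total

      def congestion_volume(packets):
          # Absolute count, in bytes, of congestion caused; a weighted
          # transport keeps this below a constant-weight target.
          return sum(p['size'] for p in packets
                     if p['byte_congestion'])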

B.2.  Example Scenarios

B.2.1.  Notation

   To prove our idealised wire protocol (Appendix B.1) is correct, we
   will compare two flows with different packet sizes, s_1 and s_2 [bit/
   pkt], to make sure their transports each see the correct congestion
   notification.  Initially, within each flow we will take all packets
   as having equal sizes, but later we will generalise to flows within
   which packet sizes vary.  A flow's bit rate, x [bit/s], is related to
   its packet rate, u [pkt/s], by

      x(t) = s.u(t).

   We will consider a 2x2 matrix of four scenarios:

   +-----------------------------+------------------+------------------+
   |           resource type and |   A) Equal bit   |   B) Equal pkt   |
   |            congestion level |       rates      |       rates      |
   +-----------------------------+------------------+------------------+
   |     i) bit-congestible, p_b |       (Ai)       |       (Bi)       |
   |    ii) pkt-congestible, p_p |       (Aii)      |       (Bii)      |
   +-----------------------------+------------------+------------------+

                                  Table 3

B.2.2.  Bit-congestible resource, equal bit rates (Ai)

   Starting with the bit-congestible scenario, for two flows to maintain
   equal bit rates (Ai) the ratio of the packet rates must be the
   inverse of the ratio of packet sizes: u_2/u_1 = s_1/s_2.  So, for
   instance, a flow of 60B packets would have to send 25x more packets
   to achieve the same bit rate as a flow of 1500B packets.  If a
   congested resource marks proportion p_b of packets irrespective of
   size, the ratio of marked packets received by each transport will
   still be the same as the ratio of their packet rates, p_b.u_2/p_b.u_1
   = s_1/s_2.  So of the 25x more 60B packets sent, 25x more will be
   marked than in the 1500B packet flow, but 25x more won't be marked
   too.

   In this scenario, the resource is bit-congestible, so it always uses
   our idealised bit-congestion field when it marks packets.  Therefore
   the transport should count marked bytes not packets.  But it doesn't
   actually matter for ratio-based transports like TCP (Appendix B.1).
   The ratio of marked to unmarked bytes seen by each flow will be p_b,
   as will the ratio of marked to unmarked packets.  Because they are
   ratios, the units cancel out.

   If a flow sent an inconsistent mixture of packet sizes, we have said
   it should count the ratio of marked and unmarked bytes not packets in
   order to correctly decode the level of congestion.  But actually, if
   all it is trying to do is decode p_b, it still doesn't matter.  For
   instance, imagine the two equal bit rate flows were actually one flow
   at twice the bit rate sending a mixture of one 1500B packet for every
   twenty-five 60B packets.  25x more small packets will be marked and
   25x more will be unmarked.  The transport can still calculate p_b
   whether it uses bytes or packets for the ratio.  In general, for any
   algorithm which works on a ratio of marks to non-marks, either bytes
   or packets can be counted interchangeably, because the choice cancels
   out in the ratio calculation.
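
   The following Python fragment checks this numerically for the mixed
   flow above (the value of p_b is arbitrary, chosen only for the
   illustration); working in expected values, the ratio comes out as
   p_b whichever unit is counted:

      p_b = 0.04                          # arbitrary marking fraction
      sizes = [1500] + [60] * 25          # bytes: one large, 25 small

      exp_marked_bytes = p_b * sum(sizes)     # expected marked bytes
      exp_marked_pkts = p_b * len(sizes)      # expected marked packets

      assert abs(exp_marked_bytes / sum(sizes) - p_b) < 1e-12
      assert abs(exp_marked_pkts / len(sizes) - p_b) < 1e-12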

   However, where an absolute target rather than relative volume of
   congestion caused is important (Appendix B.1), as it is for
   congestion accountability [Rate_fair_Dis], the transport must count
   marked bytes not packets, in this bit-congestible case.  Aside from
   the goal of congestion accountability, this is how the bit rate of a
   transport can be made independent of packet size; by ensuring the
   rate of congestion caused is kept to a constant weight
   [WindowPropFair], rather than merely responding to the ratio of
   marked and unmarked bytes.

   Note the unit of byte-congestion-volume is the byte.

B.2.3.  Bit-congestible resource, equal packet rates (Bi)

   If two flows send different packet sizes but at the same packet rate,
   their bit rates will be in the same ratio as their packet sizes, x_2/
   x_1 = s_2/s_1.  For instance, a flow sending 1500B packets at the
   same packet rate as another sending 60B packets will be sending at
   25x greater bit rate.  In this case, if a congested resource marks
   proportion p_b of packets irrespective of size, the ratio of packets
   received with the byte-congestion field marked by each transport will
   be the same, p_b.u_2/p_b.u_1 = 1.

   Because the byte-congestion field is marked, the transport should
   count marked bytes not packets.  But because each flow sends
   consistently sized packets it still doesn't matter for ratio-based
   transports.  The ratio of marked to unmarked bytes seen by each flow
   will be p_b, as will the ratio of marked to unmarked packets.
   Therefore, if the congestion control algorithm is only concerned with
   the ratio of marked to unmarked packets (as is TCP), both flows will
   be able to decode p_b correctly whether they count packets or bytes.

   But if the absolute volume of congestion is important, e.g. for
   congestion accountability, the transport must count marked bytes not
   packets.  Then the lower bit rate flow using smaller packets will
   rightly be perceived as causing less byte-congestion even though its
   packet rate is the same.

   If the two flows are mixed into one, of bit rate x1+x2, with equal
   packet rates of each size packet, the ratio p_b will still be
   measurable by counting the ratio of marked to unmarked bytes (or
   packets because the ratio cancels out the units).  However, if the
   absolute volume of congestion is required, the transport must count
   the sum of congestion marked bytes, which indeed gives a correct
   measure of the rate of byte-congestion p_b(x_1 + x_2) caused by the
   combined bit rate.
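
   A quick numeric sketch of this scenario (illustrative numbers only,
   in Python): with equal packet rates both flows see the same marking
   ratio, but the expected marked bytes differ by the ratio of the
   packet sizes:

      p_b = 0.04                  # arbitrary marking fraction
      u = 10                      # pkt/s, the same for both flows

      marked_bytes_large = p_b * 1500 * u     # expected marked bytes/s
      marked_bytes_small = p_b * 60 * u

      # The 1500B flow is rightly seen as causing 25x more
      # byte-congestion-volume than the 60B flow.
      assert abs(marked_bytes_large / marked_bytes_small - 25) < 1e-9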

B.2.4.  Pkt-congestible resource, equal bit rates (Aii)

   Moving to the case of packet-congestible resources, we now take two
   flows that send different packet sizes at the same bit rate, but this
   time the pkt-congestion field is marked by the resource with
   probability p_p.  As in scenario Ai with the same bit rates but a
   bit-congestible resource, the flow with smaller packets will have a
   higher packet rate, so more packets will be both marked and unmarked,
   but in the same proportion.

   This time, the transport should only count marks without taking into
   account packet sizes.  Transports will get the same result, p_p, by
   decoding the ratio of marked to unmarked packets in either flow.

   If one flow imitates the two flows merged together, the bit rate
   will double with more small packets than large.  The ratio of marked
   to unmarked packets will still be p_p.  But if the absolute number of
   pkt-congestion marked packets is counted it will accumulate at the
   combined packet rate times the marking probability, p_p(u_1+u_2), 26x
   faster than packet congestion accumulates in the single 1500B packet
   flow of our example, as required.

   But if the transport is interested in the absolute amount of packet
   congestion, it should just count how many marked packets arrive.  For
   instance, a flow sending 60B packets will see 25x more marked packets
   than one sending 1500B packets at the same bit rate, because it is
   sending more packets through a packet-congestible resource.

   Note the unit of packet congestion is a packet.
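
   The packet-rate arithmetic of this scenario can be checked with small
   integers (illustrative numbers only, in Python; the resource is
   assumed to mark one packet in a hundred):

      u_1 = 100             # 1500B packets per second
      u_2 = 25 * u_1        # 60B packets per second, same bit rate

      marks_1 = u_1 // 100  # expected pkt-congestion marks/s, 1500B flow
      marks_2 = u_2 // 100  # expected pkt-congestion marks/s, 60B flow

      # The small-packet flow sees 25x more marked packets ...
      assert marks_2 == 25 * marks_1
      # ... and a flow merging the two accumulates packet congestion
      # 26x faster than the 1500B flow alone.
      assert (u_1 + u_2) // 100 == 26 * marks_1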

B.2.5.  Pkt-congestible resource, equal packet rates (Bii)

   Finally, if two flows with the same packet rate pass through a
   packet-congestible resource, they will both suffer the same
   proportion of marking, p_p, irrespective of their packet sizes.  On
   detecting that the pkt-congestion field is marked, the transport
   should count packets, and it will be able to extract the ratio p_p of
   marked to unmarked packets from both flows, irrespective of packet
   sizes.

   Even if the transport is monitoring the absolute amount of packet
   congestion over a period, still it will see the same amount of packet
   congestion from either flow.

   And if the two equal packet rates of different size packets are mixed
   together in one flow, the packet rate will double, so the absolute
   volume of packet-congestion will accumulate at twice the rate of
   either flow, 2p_p.u_1 = p_p(u_1+u_2).

Appendix C.  Byte-mode Drop Complicates Policing Congestion Response

   This appendix explains why the ability of networks to police the
   response of _any_ transport to congestion depends on bit-congestible
   network resources only doing packet-mode not byte-mode drop.

   To be able to police a transport's response to congestion when
   fairness can only be judged over time and over all an individual's
   flows, the policer has to have an integrated view of all the
   congestion an individual (not just one flow) has caused due to all
   traffic entering the Internet from that individual.  This is termed
   congestion accountability.

   But a byte-mode drop algorithm has to extract depend on the ratio p_p local MTU of
   marked the
   line - an algorithm needs to unmarked packets from both flows, irrespective use some concept of a 'normal' packet
   size.  Therefore, one dropped or marked packet
   sizes.

   Even if the transport is monitoring not necessarily
   equivalent to another unless you know the absolute amount MTU at the queue where it
   was dropped/marked.  To have an integrated view of packets
   congestion over a period, still user, we believe
   congestion policing has to be located at an individual's attachment
   point to the Internet [I-D.briscoe-tsvwg-re-ecn-tcp].  But from there
   it will see cannot know the same amount MTU of packet each remote queue that caused each drop/
   mark.  Therefore it cannot take an integrated approach to policing
   all the responses to congestion from either flow.

   And if of all the two equal packet rates transports of different size packets are mixed
   together in one flow, the packet rate will double, so
   individual.  Therefore it cannot police anything.

   The security/incentive argument _for_ packet-mode drop is similar.
   Firstly, confining RED to packet-mode drop would not preclude
   bottleneck policing approaches such as [pBox] as it seems likely they
   could work just as well by monitoring the volume of dropped bytes
   rather than packets.  Secondly packet-mode dropping/marking naturally
   allows the congestion notification of packets to be globally
   meaningful without relying on MTU information held elsewhere.

   Because we recommend that a dropped/marked packet should be taken to
   mean that all the bytes in the packet are dropped/marked, a policer
   can remain robust against bits being re-divided into different size
   packets or across different size flows [Rate_fair_Dis].  Therefore
   policing would work naturally with just simple packet-mode drop in
   RED.
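
   This robustness property can be sketched in a few lines of Python
   (hypothetical and much simplified; a real policer in the sense of
   [I-D.briscoe-tsvwg-re-ecn-tcp] involves far more than this).  Because
   the policer accrues the bytes of every dropped/marked packet,
   re-dividing the same bytes into more, smaller packets does not change
   the congestion-volume attributed to a user:

      def accrue(volume_by_user, user, pkt_size):
          # Count every byte of a dropped/marked packet against the
          # user who sent it.
          volume_by_user[user] = volume_by_user.get(user, 0) + pkt_size

      v1, v2 = {}, {}
      accrue(v1, 'alice', 1500)          # one marked 1500B packet
      for _ in range(25):
          accrue(v2, 'alice', 60)        # the same bytes as 25 x 60B
      assert v1['alice'] == v2['alice'] == 1500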

   In summary, making drop probability depend on the size of the
   packets that bits happen to be divided into simply encourages the
   bits to be divided into smaller packets.  Byte-mode drop would
   therefore irreversibly complicate any attempt to fix the Internet's
   incentive structures.

Appendix D.  Changes from this so that Previous Versions

   To be removed by the achievable flow rate
   emerges RFC Editor on publication.

   Full incremental diffs between each version are available at
   <http://www.cs.ucl.ac.uk/staff/B.Briscoe/pubs.html#byte-pkt-congest>
   or
   <http://tools.ietf.org/wg/tsvwg/draft-ietf-tsvwg-byte-pkt-congest/>
   (courtesy of the rfcdiff tool):

   From -01 to -02 (this version):

      *  Restructured the whole document for (hopefully) easier reading
         and clarity.  The concrete recommendation, in RFC2119 language,
         is now in Section 5.

   From -00 to -01:

      *  Minor clarifications throughout and updated references

   From briscoe-byte-pkt-mark-02 to ietf-byte-pkt-congest-00:

      *  Added note on relationship to existing RFCs
      *  Posed the question of whether packet-congestion could become
         common and deferred it to the IRTF ICCRG.  Added ref to the
         dual-resource queue (DRQ) proposal.
      *  Changed PCN references from the PCN charter & architecture to
         the PCN marking behaviour draft most likely to imminently
         become the standards track WG item.

   From -01 to police a transport's response -02:

      *  Abstract reorganised to align with clearer separation of issue
         in the memo.

      *  Introduction reorganised with motivating arguments removed to
         new Section 2.

      *  Clarified avoiding lock-out of large packets is not the main or
         only motivation for RED.

      *  Mentioned choice of drop or marking explicitly throughout,
         rather than trying to coin a word to mean either.

      *  Generalised the discussion throughout to any packet forwarding
         function on any network equipment, not just routers.

      *  Clarified the last point about why this is a good time to sort
         out this issue: because it will be hard / impossible to design
         new transports unless we decide whether the network or the
         transport is allowing for packet size.

      *  Added statement explaining the horizon of the memo is long
         term, but with short term expediency in mind.

      *  Added material on scaling congestion control with packet size
         (Section 2.1).

      *  Separated out issue of normalising TCP's bit rate from issue
         of preference to control packets (Section 2.3).

      *  Divided up Congestion Measurement section for clarity,
         including new material on fixed size packet buffers and buffer
         carving (Section 3.1.1 & Section 3.2.1) and on congestion
         measurement in wireless link technologies without queues
         (Section 3.1.2).

      *  Added section on 'Making Transports Robust against Control
         Packet Losses' (Section 3.2.3) with existing & new material
         included.

      *  Added tabulated results of vendor survey on byte-mode drop
         variant of RED (Table 2).

   From -00 to -01:

      *  Clarified applicability to drop as well as ECN.

      *  Highlighted DoS vulnerability.

      *  Emphasised that drop-tail suffers from similar problems to
         byte-mode drop, so only byte-mode drop should be turned off,
         not RED itself.

      *  Clarified the original apparent motivations for recommending
         byte-mode drop included protecting SYNs and pure ACKs more
         than equalising the bit rates of TCPs with different segment
         sizes.  Removed some conjectured motivations.

      *  Added support for updates to TCP in progress (ackcc & ecn-syn-
         ack).

      *  Updated survey results with newly arrived data.

      *  Pulled all recommendations together into the conclusions.

      *  Moved some detailed points into two additional appendices and
         a note.

      *  Considerable clarifications throughout.

      *  Updated references

Authors' Addresses

   Bob Briscoe
   BT
   B54/77, Adastral Park
   Martlesham Heath
   Ipswich  IP5 3RE
   UK

   Phone: +44 1473 645196
   EMail: bob.briscoe@bt.com
   URI:   http://bobbriscoe.net/

   Jukka Manner
   Aalto University
   Department of Communications and Networking (Comnet)
   P.O. Box 13000
   FIN-00076 Aalto
   Finland

   Phone: +358 9 470 22481
   EMail: jukka.manner@tkk.fi
   URI:   http://www.netlab.tkk.fi/~jmanner/