Network Working Group                                         T. Maufer
Internet-Draft                                                C. Semeria
Category: Informational                                3Com Corporation
Expire in six months                                          July 1997


                  Introduction to IP Multicast Routing
              <draft-ietf-mboned-intro-multicast-03.txt>

Status of this Memo

This document is an Internet Draft.  Internet Drafts are working
documents of the Internet Engineering Task Force (IETF), its Areas, and
its Working Groups.  Note that other groups may also distribute working
documents as Internet Drafts.

Internet Drafts are draft documents valid for a maximum of six months.
Internet Drafts may be updated, replaced, or obsoleted by other
documents at any time.  It is inappropriate to use Internet Drafts as
reference material or to cite them other than as a "working draft" or
"work in progress."

To learn the current status of any Internet-Draft, please check the
"1id-abstracts.txt" listing contained in the internet-drafts Shadow
Directories on:

     ftp.is.co.za         (Africa)
     nic.nordu.net        (Europe)
     ds.internic.net      (US East Coast)
     ftp.isi.edu          (US West Coast)
     munnari.oz.au        (Pacific Rim)

ABSTRACT

The first part of this paper describes the benefits of multicasting,
the MBone, Class D addressing, and the operation of the Internet Group
Management Protocol (IGMP).  The second section explores a number of
different techniques that may potentially be employed by multicast
routing protocols:

     o Flooding
     o Spanning Trees
     o Reverse Path Broadcasting (RPB)
     o Truncated Reverse Path Broadcasting (TRPB)
     o Reverse Path Multicasting (RPM)
     o "Shared-Tree" Techniques

The third part contains the main body of the paper.  It describes how
the previous techniques are implemented in multicast routing protocols
available today (or under development).

     o Distance Vector Multicast Routing Protocol (DVMRP)
     o Multicast Extensions to OSPF (MOSPF)
     o Protocol-Independent Multicast - Dense Mode (PIM-DM)
     o Protocol-Independent Multicast - Sparse Mode (PIM-SM)
     o Core-Based Trees (CBT)

FOREWORD

This document is introductory in nature.  We have not attempted to
describe every detail of each protocol; rather, we give a concise
overview in all cases, with enough specifics to allow a reader to grasp
the essential details and operation of the protocols related to
multicast IP.  Every effort has been made to ensure the accurate
representation of any cited works, especially any works-in-progress.
For the complete details, we refer you to the relevant
specification(s).

If internet-drafts were cited in this document, it is only because they
were the only sources of certain technical information at the time of
this writing. We expect that many of the internet-drafts which we have
cited will eventually become RFCs. See the IETF's internet-drafts
shadow directories for the status of any of these drafts, their follow-
on drafts, or possibly the resulting RFCs.
TABLE OF CONTENTS
Section

1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . INTRODUCTION
1.1 . . . . . . . . . . . . . . . . . . . . . . . . . Multicast Groups
1.2 . . . . . . . . . . . . . . . . . . . . . Group Membership Protocol
1.3 . . . . . . . . . . . . . . . . . . . . Multicast Routing Protocols
1.3.1 . . . . . . . . . . . Multicast Routing vs. Multicast Forwarding
2 . . . . . . . . MULTICAST SUPPORT FOR EMERGING INTERNET APPLICATIONS
2.1 . . . . . . . . . . . . . . . . . . . . . . . Reducing Network Load
2.2 . . . . . . . . . . . . . . . . . . . . . . . . Resource Discovery
2.3 . . . . . . . . . . . . . . . Support for Datacasting Applications
3 . . . . . . . . . . . . . . THE INTERNET'S MULTICAST BACKBONE (MBone)
4 . . . . . . . . . . . . . . . . . . . . . . . . MULTICAST ADDRESSING
4.1 . . . . . . . . . . . . . . . . . . . . . . . . Class D Addresses
4.2 . . . . . . . Mapping a Class D Address to an IEEE-802 MAC Address
4.3 . . . . . . . . . Transmission and Delivery of Multicast Datagrams
5 . . . . . . . . . . . . . . INTERNET GROUP MANAGEMENT PROTOCOL (IGMP)
5.1 . . . . . . . . . . . . . . . . . . . . . . . . . . IGMP Version 1
5.2 . . . . . . . . . . . . . . . . . . . . . . . . . . IGMP Version 2
5.3 . . . . . . . . . . . . . . . . . . . . . . IGMP Version 3 (Future)
6 . . . . . . . . . . . . . . . . . . . MULTICAST FORWARDING TECHNIQUES
6.1 . . . . . . . . . . . . . . . . . . . . . "Simpleminded" Techniques
6.1.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . Flooding
6.1.2 . . . . . . . . Multicast Extensions to MAC-layer Spanning Trees
6.2 . . . . . . . . . . . . . . . . . . . Source-Based Tree Techniques
6.2.1 . . . . . . . . . . . . . . . . . Reverse Path Broadcasting (RPB)
6.2.1.1 . . . . . . . . . . . . . Reverse Path Broadcasting: Operation
6.2.1.2 . . . . . . . . . . . . . . . . . RPB: Benefits and Limitations
6.2.2 . . . . . . . . . . . Truncated Reverse Path Broadcasting (TRPB)
6.2.3 . . . . . . . . . . . . . . . . . Reverse Path Multicasting (RPM)
6.2.3.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . Operation
6.2.3.2 . . . . . . . . . . . . . . . . . . . . . . . . . . Limitations
6.3 . . . . . . . . . . . . . . . . . . . . . . Shared Tree Techniques
6.3.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . Operation
6.3.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . Benefits
6.3.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . Limitations
7 . . . . . . . . . . . . . . . . . . . "DENSE MODE" ROUTING PROTOCOLS
7.1 . . . . . . . . Distance Vector Multicast Routing Protocol (DVMRP)
7.1.1 . . . . . . . . . . . . . . . . . Physical and Tunnel Interfaces
7.1.2 . . . . . . . . . . . . . . . . . . . . . . . . . Basic Operation
7.1.3 . . . . . . . . . . . . . . . . . . . . . DVMRP Router Functions
7.1.4 . . . . . . . . . . . . . . . . . . . . . . . DVMRP Routing Table
7.1.5 . . . . . . . . . . . . . . . . . . . . . DVMRP Forwarding Table
7.1.6 . . . . . . . . . . . DVMRP Tree Building and Forwarding Summary
7.2 . . . . . . . . . . . . . . . Multicast Extensions to OSPF (MOSPF)
7.2.1 . . . . . . . . . . . . . . . . . . Intra-Area Routing with MOSPF
7.2.1.1 . . . . . . . . . . . . . . . . . . . . . Local Group Database
7.2.1.2 . . . . . . . . . . . . . . . . . Datagram's Shortest Path Tree
7.2.1.3 . . . . . . . . . . . . . . . . . . . . . . . Forwarding Cache
7.2.2 . . . . . . . . . . . . . . . . . . Mixing MOSPF and OSPF Routers
7.2.3 . . . . . . . . . . . . . . . . . . Inter-Area Routing with MOSPF
7.2.3.1 . . . . . . . . . . . . . . . . Inter-Area Multicast Forwarders
7.2.3.2 . . . . . . . . . . . Inter-Area Datagram's Shortest Path Tree
7.2.4 . . . . . . . . . Inter-Autonomous System Multicasting with MOSPF
7.2.5 . . . . . . . . . . . MOSPF Tree Building and Forwarding Summary
7.3 . . . . . . . . . . . . . . . Protocol-Independent Multicast (PIM)
7.3.1 . . . . . . . . . . . . . . . . . . . . PIM - Dense Mode (PIM-DM)
7.3.1.1 . . . . . . . . . . PIM-DM Tree Building and Forwarding Summary
8 . . . . . . . . . . . . . . . . . . . "SPARSE MODE" ROUTING PROTOCOLS
8.1 . . . . . . . Protocol-Independent Multicast - Sparse Mode (PIM-SM)
8.1.1 . . . . . . . . . . . . . . Directly Attached Host Joins a Group
8.1.2 . . . . . . . . . . . . Directly Attached Source Sends to a Group
8.1.3 . . . . . . . Shared Tree (RP-Tree) or Shortest Path Tree (SPT)?
8.1.4 . . . . . . . . . . . PIM-SM Tree Building and Forwarding Summary
8.2 . . . . . . . . . . . . . . . . . . . . . . Core Based Trees (CBT)
8.2.1 . . . . . . . . . . . . . . . . . . Joining a Group's Shared Tree
8.2.2 . . . . . . . . . . . . . . . . . . . . . Data Packet Forwarding
8.2.3 . . . . . . . . . . . . . . . . . . . . . . . Non-Member Sending
8.2.4 . . . . . . . . . . . . CBT Tree Building and Forwarding Summary
8.2.5 . . . . . . . . . . . . . . . . . CBT Multicast Interoperability
9 . . . . . . . . . . . . . . . . MULTICAST IP ROUTING: RELATED TOPICS
9.1 . . . . . . Interoperability Framework For Multicast Border Routers
9.1.1 . . . . . . . . . . . . Requirements for Multicast Border Routers
9.2 . . . . . . . . . . . . . . . . Issues with Expanding-Ring Searches
10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . REFERENCES
10.1 . . . . . . . . . . . . . . . . . . . Requests for Comments (RFCs)
10.2 . . . . . . . . . . . . . . . . . . . . . . . . . Internet-Drafts
10.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . Textbooks
10.4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Other
11 . . . . . . . . . . . . . . . . . . . . . . SECURITY CONSIDERATIONS
12 . . . . . . . . . . . . . . . . . . . . . . . . . . ACKNOWLEDGEMENTS
13 . . . . . . . . . . . . . . . . . . . . . . . . . AUTHORS' ADDRESSES

1. INTRODUCTION

There are three fundamental types of IPv4 addresses:  unicast,
broadcast, and multicast.  A unicast address is used to transmit a
packet to a single destination.  A broadcast address is used to send a
datagram to an entire subnetwork.  A multicast address is designed to
enable the delivery of datagrams to a set of hosts that have been
configured as members of a multicast group across various
subnetworks.

1.1 Multicast Groups

Hosts may join and leave multicast groups dynamically.  A host may
belong to more than one multicast group at any given time and does not
have to belong to a group to send packets to members of a group.

1.2 Group Membership Protocol

A group membership protocol is employed by routers to learn about the
presence of group members on their directly attached subnetworks.  When
a host joins a multicast group, it transmits a group membership
protocol message for the group(s) that it wishes to receive, and sets
its IP process and network interface card to receive frames addressed
to the
multicast group.  This receiver-initiated join process has excellent
scaling properties since, as the multicast group increases in size, it
becomes ever more likely that a new group member will be able to locate
a nearby branch of the multicast delivery tree.

========================================================================

   [Figure 1 illustrates the multicast IP delivery service:  the hosts
    on each subnetwork use a group membership protocol to communicate
    with their local multicast router, and the multicast routers use a
    multicast routing protocol to communicate with each other.]

                                LEGEND

                  <- - - ->   Group Membership Protocol
                  <-+-+-+->   Multicast Routing Protocol

                Figure 1: Multicast IP Delivery Service
========================================================================

1.3 Multicast Routing Protocols
Multicast routers execute a multicast routing protocol to define
delivery paths that enable the forwarding of multicast datagrams
across an internetwork.
1.3.1 Multicast Routing vs. Multicast Forwarding
Multicast routing protocols establish or help establish the distribution
tree for a given group, which enables multicast forwarding of packets
addressed to the group. In the case of unicast, routing protocols are
also used to build a forwarding table (commonly called a routing table).
Unicast destinations are entered in the routing table, and associated
with a metric and a next-hop router toward the destination. The key
difference between unicast forwarding and multicast forwarding is that
multicast packets must be forwarded away from their source.  If a
packet is ever forwarded back toward its source, a forwarding loop
could have formed, possibly leading to a multicast "storm."

Each routing protocol constructs a forwarding table in its own way.
The forwarding table tells each router how to forward packets from a
certain source, from a given source sending to a certain group (a
(source, group) pair), or from any source sending to a given group.
This typically takes the form of rules stating that such packets are
expected to arrive on a certain "inbound" or "upstream" interface and
must be copied to a certain (set of) "outbound" or "downstream"
interface(s) in order to reach all known subnetworks with group
members.  Not all multicast routing protocols use the same style of
forwarding state, and some use techniques not mentioned here.
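
As a purely illustrative sketch (the addresses, interface names, and
dictionary layout below are hypothetical, and real protocols build this
state in protocol-specific ways), a (source, group) style forwarding
entry and the forwarding decision might look like this in Python:

   # Hypothetical (source, group) forwarding state; some protocols keep
   # per-group (shared-tree) state instead.
   forwarding_table = {
       # (source subnet, group)      upstream "iif"  downstream "oifs"
       ("128.1.0.0/16", "234.138.8.5"): {"iif": "eth0",
                                         "oifs": {"eth1", "eth2"}},
   }

   def forward(source_subnet, group, arrival_iface, packet, transmit):
       """Replicate a multicast packet onto every downstream interface."""
       entry = forwarding_table.get((source_subnet, group))
       if entry is None or arrival_iface != entry["iif"]:
           return              # no state, or not moving away from source: drop
       for oif in entry["oifs"]:
           transmit(packet, oif)   # copy toward each subnetwork with members
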
2. MULTICAST SUPPORT FOR EMERGING INTERNET APPLICATIONS

Today, the majority of Internet applications rely on point-to-point
transmission.  The utilization of point-to-multipoint transmission has
traditionally been limited to local area network applications.  Over
the past few years the Internet has seen a rise in the number of new
applications that rely on multicast transmission.  Multicast IP
conserves bandwidth by forcing the network to do packet replication
only when necessary, and offers an attractive alternative to unicast
transmission.

2.1 Reducing Network Load

Broadcast transmission is not an acceptable solution for this type of
application since it affects the CPU performance of each and every
station that sees the packet.  Besides, it wastes bandwidth.

2.2 Resource Discovery

Some applications utilize multicast instead of broadcast transmission
to transmit packets to group members residing on the same subnetwork.
However, there is no reason to limit the extent of a multicast
transmission to a single LAN.  The time-to-live (TTL) field in the IP
header can be used to limit the range (or "scope") of a multicast
transmission.  "Expanding ring searches" are one often-cited generic
multicast application, and they will be discussed in detail once the
mechanisms used by each multicast routing protocol have been described.
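
The sketch below (in Python) illustrates the idea of TTL scoping, and
by extension an expanding ring search, using the standard socket option
IP_MULTICAST_TTL.  The group address, port, and probe payload are
invented for the example; a real resource-discovery application would
define its own request and reply formats.

   import socket

   GROUP, PORT = "234.138.8.5", 5000            # hypothetical group/port
   PROBE = b"anyone offering this service?"     # hypothetical payload

   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
   sock.settimeout(2.0)
   reply = None
   for ttl in (1, 2, 4, 8, 16):                 # widen the "ring" each pass
       sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
       sock.sendto(PROBE, (GROUP, PORT))
       try:
           reply, server = sock.recvfrom(2048)  # a nearby responder answers
           break                                # stop at the first answer
       except socket.timeout:
           continue                             # nothing in scope; expand
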
2.3 Support for Datacasting Applications

Since 1992, the IETF has conducted a series of "audiocast" experiments
in which live audio and video were multicast from the IETF meeting site
to destinations around the world.  In this case, "datacasting" takes
compressed audio and video signals from the source station and
transmits them as a sequence of UDP packets to a group address.
Multicast delivery today is not limited to audio and video.  Stock
quote systems are one example of a (connectionless) data-oriented
multicast application.

3. THE INTERNET'S MULTICAST BACKBONE (MBone)

The Internet Multicast Backbone (MBone) is an interconnected set of
subnetworks and routers that support the delivery of IP multicast
traffic.  The goal of the MBone is to construct a semipermanent IP
multicast testbed to enable the deployment of multicast applications
without waiting for the ubiquitous deployment of multicast-capable
routers in the Internet.

The MBone has grown from 40 subnets in four different countries in
1992, to more than 4300 subnets worldwide by July 1997.  With new
multicast applications and multicast-based services appearing, it seems
likely that the use of multicast technology in the Internet will keep
growing at an ever-increasing rate.

The MBone is a virtual network that is layered on top of sections of
the physical Internet.  It is composed of islands of multicast routing
capability connected to other islands, or "regions," by virtual point-
to-point links called "tunnels."  The tunnels allow multicast traffic
to pass through the non-multicast-capable parts of the Internet.
Tunneled IP multicast packets are encapsulated as IP-over-IP (i.e., the
protocol number is set to 4) so they are seen as regular unicast
packets by the intervening routers.  The encapsulating IP header is
added on entry to a tunnel and stripped off on exit.  This set of
multicast routers, their directly-connected subnetworks, and the
interconnecting tunnels comprise the MBone.
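
A minimal sketch of the encapsulation follows, assuming the inner
(already-formed) multicast packet is available as bytes.  The tunnel
endpoint addresses are hypothetical, and a real tunnel implementation
would also handle fragmentation, TTL selection, and IP options.

   import socket
   import struct

   def checksum(data: bytes) -> int:
       """Standard 16-bit one's complement sum over 16-bit words."""
       if len(data) % 2:
           data += b"\x00"
       total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
       total = (total & 0xFFFF) + (total >> 16)
       total = (total & 0xFFFF) + (total >> 16)
       return ~total & 0xFFFF

   def encapsulate(inner: bytes, tunnel_src: str, tunnel_dst: str) -> bytes:
       """Prepend an outer IPv4 header with protocol = 4 (IP-over-IP)."""
       ver_ihl, tos, ttl, proto = 0x45, 0, 64, 4
       header = struct.pack("!BBHHHBBH4s4s", ver_ihl, tos, 20 + len(inner),
                            0, 0, ttl, proto, 0,
                            socket.inet_aton(tunnel_src),
                            socket.inet_aton(tunnel_dst))
       # Insert the computed header checksum at bytes 10-11.
       header = header[:10] + struct.pack("!H", checksum(header)) + header[12:]
       return header + inner
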
Since the MBone and the Internet have different topologies, multicast
routers execute a separate routing protocol to decide how to forward
multicast packets.  In some cases, this means that they include their
own internal unicast routing protocol, but in other cases the multicast
routing protocol relies on the routing table provided by the underlying
unicast routing protocols.

The majority of the MBone regions are currently interconnected by the
Distance Vector Multicast Routing Protocol (DVMRP).  Internally, the
regions may execute any multicast routing protocol they choose, e.g.,
Multicast extensions to OSPF (MOSPF), the Protocol-Independent
Multicast (PIM) routing protocol(s), or the DVMRP.

As multicast routing software features become more widely available on
the routers of the Internet, providers may gradually decide to use
"native" multicast as an alternative to using lots of tunnels.
"Native" multicast happens when a region of routers (or set of regions)
operates without tunnels:  all subnetworks are connected by at least
one router which is capable of forwarding their multicast packets as
required, so multicast packets simply flow where they need to.

========================================================================

   [Figure 2 shows five multicast-capable regions (A through E)
    interconnected across the unicast-only parts of the Internet by
    virtual point-to-point "tunnels."]

              Figure 2: Internet Multicast Backbone (MBone)
========================================================================

The MBone carries audio and video multicasts of Internet Engineering
Task Force (IETF) meetings, NASA Space Shuttle Missions, US House and
Senate sessions, and other content.  There are public and private
sessions on the MBone.  Sessions that are meant for public consumption
are announced via the session directory (SDR) tool.  Users of this tool
see a list of current and future public sessions, provided they are
within the sender's "scope."

4. MULTICAST ADDRESSING

A multicast address is assigned to a set of Internet hosts comprising a
multicast group.  Senders use the multicast address as the destination
IP address of a packet that is to be transmitted to all group members.

4.1 Class D Addresses

An IP multicast group is identified by a Class D address.  Class D
addresses have their high-order four bits set to "1110" followed by
a 28-bit multicast group ID.  Expressed in standard "dotted-decimal"
notation, multicast group addresses range from 224.0.0.0 to
239.255.255.255 (shorthand: 224.0.0.0/4).
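
For illustration, Python's standard ipaddress module applies exactly
this test (high-order bits "1110", i.e., membership in 224.0.0.0/4):

   import ipaddress

   addr = ipaddress.ip_address("239.255.255.255")
   print(addr.is_multicast)                             # True: bits are 1110
   print(addr in ipaddress.ip_network("224.0.0.0/4"))   # True: same prefix test
   print(ipaddress.ip_address("223.255.255.255").is_multicast)   # False
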
Figure 3 shows the format of a 32-bit Class D address:

========================================================================

    0 1 2 3                                                          31
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |1|1|1|0|                  Multicast Group ID                    |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
           |------------------------28 bits------------------------|

               Figure 3: Class D Multicast Address Format
========================================================================

The block of multicast addresses ranging from 224.0.0.1 through
224.0.0.255 is reserved for use by routing protocols and other low-
level protocols on a local subnetwork.  Some of these well-known groups
include:

     "all systems on this subnet"            224.0.0.1
     "all routers on this subnet"            224.0.0.2
     "all DVMRP routers"                     224.0.0.4
     "all OSPF routers"                      224.0.0.5
     "all OSPF designated routers"           224.0.0.6
     "all RIP2 routers"                      224.0.0.9
     "all PIM routers"                       224.0.0.13
     "all CBT routers"                       224.0.0.15

The remaining groups, ranging from 224.0.1.0 to 239.255.255.255, are
either permanently assigned to various multicast applications or are
available for dynamic assignment (via SDR or other methods).  From this
range, the addresses from 239.0.0.0 to 239.255.255.255 are reserved for
various "administratively scoped" applications (e.g., within private
networks), which are not necessarily Internet-wide applications.

The complete list may be found in the Assigned Numbers RFC (RFC 1700 or
its successor) or at the assignments page of the IANA Web Site:

     <URL:http://www.iana.org/iana/assignments.html>

4.2 Mapping a Class D Address to an IEEE-802 MAC Address

The IANA has been allocated a reserved portion of the IEEE-802 MAC-
layer multicast address space.  All of the addresses in IANA's reserved
block begin with 01-00-5E (hex); to be clear, the range from
01-00-5E-00-00-00 to 01-00-5E-FF-FF-FF is used for IP multicast groups.

A simple procedure was developed to map Class D addresses to this
reserved MAC-layer multicast address block.  This allows IP
multicasting to easily take advantage of the hardware-level
multicasting supported by network interface cards.

The mapping between a Class D IP address and an IEEE-802 (e.g., FDDI,
Ethernet) MAC-layer multicast address is obtained by placing the low-
order 23 bits of the Class D address into the low-order 23 bits of
IANA's reserved MAC-layer multicast address block.  This simple
procedure means that no address resolution protocol is needed:  all
hosts and routers know this simple transformation, and can easily send
any IP multicast over any IEEE-802-based LAN.

Figure 4 illustrates how the multicast group address 234.138.8.5
(or EA-8A-08-05 expressed in hex) is mapped into an IEEE-802 multicast
address.  Note that the high-order nine bits of the IP address are not
mapped into the MAC-layer multicast address.

The mapping in Figure 4 places the low-order 23 bits of the IP
multicast group ID into the low-order 23 bits of the IEEE-802 multicast
address.  Because there are 28 significant bits in an IP multicast
group ID but only 23 bits are mapped, multiple group addresses map to
each IEEE-802 address.  To be precise, 32 class D addresses map to each
MAC-layer multicast address.  If two or more groups on a LAN have, by
some astronomical coincidence, chosen class D addresses which map to
the same MAC-layer multicast address, then the member hosts will
receive traffic for all of those groups.  It is then up to their IP-
layer software to determine which packets are for the group(s) to which
they really belong.  The class D addresses 224.10.8.5 (E0-0A-08-05) and
225.138.8.5 (E1-8A-08-05) are shown to map to the same IEEE-802 MAC-
layer multicast address (01-00-5E-0A-08-05) as 234.138.8.5.

========================================================================

 Class D Addresses:
   234.138.8.5 (EA-8A-08-05)  1110 1010 1000 1010 0000 1000 0000 0101
   224.10.8.5  (E0-0A-08-05)  1110 0000 0000 1010 0000 1000 0000 0101
   225.138.8.5 (E1-8A-08-05)  1110 0001 1000 1010 0000 1000 0000 0101
                              **** '''' '^^^ ^^^^ ^^^^ ^^^^ ^^^^ ^^^^

        ****   Class D prefix (not mapped)
        '''''  high-order five bits of the group ID (not mapped)
        ^...^  low-order 23 bits (mapped into the MAC-layer address)

 [The low-order 23 bits of each Class D address are placed into the
  low-order 23 bits of IANA's reserved MAC-layer multicast address
  block.  All three example addresses therefore yield the same IEEE-802
  MAC-layer multicast address:  01-00-5E-0A-08-05.]

   Figure 4: Mapping between Class D and IEEE-802 Multicast Addresses
========================================================================
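
The mapping itself is easy to express in a few lines of code.  The
sketch below is illustrative only; it reproduces the example above, and
all three class D addresses yield 01-00-5E-0A-08-05.

   import socket
   import struct

   def class_d_to_mac(group: str) -> str:
       """Place the low 23 bits of a Class D address into 01-00-5E-00-00-00."""
       group_int = struct.unpack("!I", socket.inet_aton(group))[0]
       mac_int = 0x01005E000000 | (group_int & 0x7FFFFF)   # keep low 23 bits
       return "-".join("%02X" % ((mac_int >> shift) & 0xFF)
                       for shift in range(40, -8, -8))

   for g in ("234.138.8.5", "224.10.8.5", "225.138.8.5"):
       print(g, "->", class_d_to_mac(g))     # all print 01-00-5E-0A-08-05
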
4.3 Transmission and Delivery of Multicast Datagrams
When the sender and receivers are members of the same (LAN) subnetwork,
the transmission and reception of multicast frames is a straightforward
process. The source station simply addresses the IP packet to the
multicast group, the network interface card maps the Class D address to
the corresponding IEEE-802 multicast address, and the frame is sent.
Internet hosts that need to receive selected multicast frames, whether
because a user has executed a multicast application or because the
host's IP stack is required to receive certain groups (e.g., 224.0.0.1,
the "all-hosts" group), notify their driver software of which group
addresses to filter.
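
In practice, an application asks for this filtering through the sockets
API.  The sketch below (the group and port are hypothetical) uses the
standard IP_ADD_MEMBERSHIP option, which causes the host to program its
interface filter and to begin reporting its membership in the group via
IGMP (described in the next section).

   import socket
   import struct

   GROUP, PORT = "234.138.8.5", 5000         # hypothetical group and port

   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
   sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
   sock.bind(("", PORT))

   # struct ip_mreq: the group to join and the local interface (any here).
   mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                      socket.inet_aton("0.0.0.0"))
   sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

   data, sender = sock.recvfrom(2048)        # datagrams sent to the group
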
Things become somewhat more complex when the sender is attached to one
subnetwork and receivers reside on different subnetworks.  In this
case, the routers must implement a multicast routing protocol that
permits the construction of multicast delivery trees and supports
multicast packet forwarding.  In addition, each router needs to
implement a group membership protocol that allows it to learn about the
existence of group members on its directly attached subnetworks.

5. INTERNET GROUP MANAGEMENT PROTOCOL (IGMP)

The Internet Group Management Protocol (IGMP) runs between hosts and
their immediately-neighboring multicast routers.  The mechanisms of the
protocol allow a host to inform its local router that it wishes to
receive transmissions addressed to a specific multicast group.  Also,
routers periodically query the LAN to determine if any group members
are still active.  If there is more than one IP multicast router on the
LAN, one of the routers is elected "querier" and assumes the
responsibility of querying the LAN for the presence of any group
members.

Based on the group membership information learned from the IGMP, a
router is able to determine which (if any) multicast traffic needs to
be forwarded to each of its "leaf" subnetworks.  "Leaf" subnetworks are
those that have no further downstream routers; they either contain
receivers for some set of groups, or they do not.  Multicast routers
use the information derived from IGMP, along with a multicast routing
protocol, to support IP multicasting across the MBone.

5.1 IGMP Version 1

IGMP Version 1 was specified in RFC-1112.  According to the
specification, multicast routers periodically transmit Host Membership
Query messages to determine which host groups have members on their
directly-attached networks.  IGMP Query messages are addressed to the
all-hosts group (224.0.0.1) and have an IP TTL = 1.  This means that
Query messages sourced from a router are transmitted onto the directly-
attached subnetwork, but are not forwarded by any other multicast
routers.

When a host receives an IGMP Query message, it responds with a Host
Membership Report for each group to which it belongs, with each Report
addressed to the group being reported.  (This is an important point:
IGMP Queries are sent to the "all hosts on this subnet" class D address
(224.0.0.1), but IGMP Reports are sent to the group(s) to which the
host(s) belong.  IGMP Reports, like Queries, are sent with the IP
TTL = 1, and thus are not forwarded beyond the local subnetwork.)

========================================================================

   [Figure 5 shows a multicast router transmitting an IGMP Query onto
    its directly-attached subnetwork.  Hosts H2 and H4 are members of
    Group 1; H1 is a member of Group 2; H3 is a member of Group 1; and
    H5 is a member of both Group 1 and Group 2.]

      Figure 5: Internet Group Management Protocol-Query Message
========================================================================
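
For reference, an IGMP version 1 message is only eight octets long:  a
4-bit version, a 4-bit type (1 = Host Membership Query, 2 = Host
Membership Report), an unused octet, a 16-bit checksum, and the group
address.  The sketch below builds a version 1 Report; it is
illustrative only, since a real host must also place the message in an
IP datagram addressed to the group with TTL = 1.

   import socket
   import struct

   def igmp_checksum(msg: bytes) -> int:
       """One's complement of the one's complement sum of the message."""
       total = sum(struct.unpack("!4H", msg))
       total = (total & 0xFFFF) + (total >> 16)
       total = (total & 0xFFFF) + (total >> 16)
       return ~total & 0xFFFF

   def build_igmpv1_report(group: str) -> bytes:
       ver_type = (1 << 4) | 2            # version 1, type 2 = Membership Report
       msg = struct.pack("!BBH4s", ver_type, 0, 0, socket.inet_aton(group))
       return msg[:2] + struct.pack("!H", igmp_checksum(msg)) + msg[4:]

   report = build_igmpv1_report("234.138.8.5")   # sent to the group, TTL = 1
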
In order to avoid a flurry of Reports, each host starts a randomly-
chosen Report delay timer for each of its group memberships.  If,
during the delay period, another host's Report is heard for the same
group, every other host in that group stops its timer and cancels its
pending Report.  This procedure spreads Reports out over a period of
time and ensures that only minimal Report traffic is generated for each
group that has at least one member on a given subnetwork.
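
A sketch of this suppression logic, from a single host's point of view,
follows; the timer value and function names are made up for the
illustration.

   import random

   MAX_REPORT_DELAY = 10.0     # seconds; illustrative maximum report delay

   class HostGroupState:
       """Per-group IGMPv1-style Report suppression on a host."""
       def __init__(self, group):
           self.group = group
           self.report_due = None           # time a delayed Report is scheduled

       def on_query(self, now):
           # A Query was heard: schedule our Report after a random delay.
           self.report_due = now + random.uniform(0, MAX_REPORT_DELAY)

       def on_report_heard(self, group, now):
           # Another member reported first: suppress our pending Report.
           if group == self.group:
               self.report_due = None

       def on_timer(self, now, send_report):
           if self.report_due is not None and now >= self.report_due:
               self.report_due = None
               send_report(self.group)      # Report goes to the group, TTL = 1
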
It should be noted that multicast routers do not need to be directly
addressed since their interfaces are required to promiscuously receive
all multicast traffic.  If, after a number of Queries, a router no
longer hears Reports for a group on one of its subnetworks (and cannot
otherwise tell that the subnetwork leads to additional group members
further downstream), this interface is removed from the delivery
tree(s) for this group.  Multicasts will continue to be sent on this
interface only if the router can tell (via multicast routing protocols)
that there are additional group members further downstream reachable
via this interface.

When a host first joins a group, it immediately transmits an IGMP
Report for the group rather than waiting for a router's IGMP Query.
This reduces the "join latency" for the first host to join a given
group on a particular subnetwork.  "Join latency" is measured from the
time when a host's first IGMP Report is sent, until the first packet
for that group arrives on that host's subnetwork.  Of course, if the
group is already active, the join latency is negligible.

5.2 IGMP Version 2

IGMP version 2 was distributed as part of the Distance Vector Multicast
Routing Protocol (DVMRP) implementation ("mrouted") source code, from
version 3.3 through 3.8.  Initially, there was no detailed
specification for IGMP version 2 other than this source code.  However,
the complete specification has recently been published in
<draft-ietf-idmr-igmp-v2-06.txt>, a work-in-progress which will update
the specification contained in the first appendix of RFC-1112.  IGMP
version 2 extends IGMP version 1 while maintaining backward
compatibility with version 1 hosts.

IGMP version 2 defines a procedure for the election of the multicast
querier for each LAN.  In IGMP version 2, the multicast router with the
lowest IP address on the LAN is elected the multicast querier.  In IGMP
version 1, the querier election was determined by the multicast routing
protocol.

IGMP version 2 defines a new type of Query message:  the Group-Specific
Query.  Group-Specific Query messages allow a router to transmit a
Query to a specific multicast group rather than all groups residing on
a directly attached subnetwork.

Finally, IGMP version 2 defines a Leave Group message to lower IGMP's
"leave latency."  When a host wishes to leave a group, it transmits an
IGMPv2 "Leave Group" message to the all-routers group (224.0.0.2), with
the group field set to the group being left.  After receiving a Leave
Group message, the IGMPv2-elected querier must find out if this was the
last member of this group on this subnetwork.  To do this, the router
begins transmitting Group-Specific Query messages on the interface that
received the Leave Group message.  If it hears no Reports in response
to the Group-Specific Query messages, then (if this is a leaf subnet)
this interface is removed from the delivery tree(s) for this group (as
was the case with IGMP version 1).  Even if there are no group members
on this subnetwork, multicasts must continue to be sent onto it if the
router can tell that additional group members further away from the
source (i.e., "downstream") are reachable via other routers attached to
this subnetwork.

"Leave latency" is measured from a router's perspective.  In version 1
of IGMP, leave latency was the time from a router's hearing the last
Report for a given group, until the router aged out that interface from
the delivery tree for that group (assuming this is a leaf subnet, of
course).  Note that the only way for the router to tell that this was
the LAST group member is that no reports are heard in some multiple of
the Query Interval (this is on the order of minutes).  IGMP version 2,
with the addition of the Leave Group message, allows a group member to
more quickly inform the router that it is done receiving traffic for a
group.  The router then must determine if this host was the last member
of this group on this subnetwork.  To do this, the router quickly
queries the subnetwork for other group members via the Group-Specific
Query message.  If no members send reports after several of these
Group-Specific Queries, the router can infer that the last member of
that group has, indeed, left the subnetwork.
group has, indeed, left the subnetwork. The benefit of lowering the group has, indeed, left the subnetwork.
leave latency is that prune messages can be sent as soon as possible
after the last member host drops out of the group, instead of having to The benefit of lowering the leave latency is that the router can quickly
wait for several minutes worth of Query intervals to pass. If a group use its multicast routing protocol to inform its "upstream" neighbors
("upstream" is the direction that a router considers to be toward a
source), allowing the delivery tree for this group to quickly adapt to
the new shape of the group (without this subnetwork and the branch that
lead to it). The alternative was having to wait for several rounds of
unanswered Queries to pass (on the order of minutes). If a group
was experiencing high traffic levels, it can be very beneficial to stop was experiencing high traffic levels, it can be very beneficial to stop
transmitting data for this group as soon as possible. transmitting data for this group as soon as possible.
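A rough back-of-the-envelope comparison makes the difference concrete.
The timer values below are illustrative assumptions of ours, not values
mandated by this document:

    # Illustrative timer values (seconds); real values come from the
    # IGMP specifications and from router configuration.
    QUERY_INTERVAL = 125.0
    ROBUSTNESS = 2                      # unanswered intervals required
    QUERY_RESPONSE_INTERVAL = 10.0
    LAST_MEMBER_QUERY_INTERVAL = 1.0
    LAST_MEMBER_QUERY_COUNT = 2

    # IGMPv1-style aging: wait several Query Intervals with no Reports.
    v1_leave_latency = (ROBUSTNESS * QUERY_INTERVAL
                        + QUERY_RESPONSE_INTERVAL)

    # IGMPv2: a Leave Group message triggers a short burst of
    # Group-Specific Queries instead.
    v2_leave_latency = (LAST_MEMBER_QUERY_COUNT
                        * LAST_MEMBER_QUERY_INTERVAL)

    print(v1_leave_latency)   # 260.0 seconds -- on the order of minutes
    print(v2_leave_latency)   # 2.0 seconds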
5.3 IGMP Version 3 (Future)
IGMP version 3 is a preliminary work-in-progress published in <draft-
cain-igmp-00.txt>. IGMP version 3, as it is currently defined, will
introduce support for Group-Source Report messages so that a host may
elect to receive traffic only from specific sources within a multicast
group. An Inclusion Group-Source Report message will allow a host to
specify the IP address(es) of the specific sources it wants to receive,
and an Exclusion Group-Source Report message will allow a host to
explicitly ask that traffic from some (list of) sources be blocked.
With IGMP versions 1 and 2, if a host wants to receive any traffic for
a group, then traffic from all the group's sources will be forwarded
onto the host's subnetwork.
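Conceptually, the per-group state behind these messages is a source
filter. The short sketch below uses names of our own choosing; it only
shows how an inclusion or exclusion list changes which (source, group)
traffic is accepted.

    class SourceFilter:
        """Sketch of per-group source filtering as proposed for IGMPv3."""

        def __init__(self, mode="exclude", sources=None):
            # mode "include": accept only the listed sources.
            # mode "exclude": accept everything except the listed sources.
            # An empty exclude list is the IGMPv1/v2 behavior: accept all.
            self.mode = mode
            self.sources = set(sources or [])

        def accepts(self, source):
            if self.mode == "include":
                return source in self.sources
            return source not in self.sources

    # A host interested in only one sender of a group:
    only_one = SourceFilter(mode="include", sources={"192.0.2.10"})
    assert only_one.accepts("192.0.2.10")
    assert not only_one.accepts("192.0.2.99")

    # A host that wants everything except one unwanted source:
    all_but = SourceFilter(mode="exclude", sources={"192.0.2.99"})
    assert all_but.accepts("192.0.2.10")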
IGMP version 3 will help conserve bandwidth by allowing a host to select
the specific sources from which it wants to receive traffic. Also,
multicast routing protocols will be able to use this information to help
conserve bandwidth when constructing the branches of their multicast
delivery trees.
Finally, support for Leave Group messages, first introduced in IGMPv2,
has been enhanced to support Group-Source Leave messages. This feature
will allow a host to leave an entire group, or to specify the specific
IP address(es) of the (source, group) pair(s) that it wishes to leave.
Note that at this time, not all existing multicast routing protocols can
support such requests. This is one issue that needs to be addressed
during the development of IGMP version 3.
6. MULTICAST FORWARDING TECHNIQUES
IGMP provides the final step in a multicast packet delivery service
since it is only concerned with the forwarding of multicast traffic from
a router to group members on its directly-attached subnetworks. IGMP is
not concerned with the delivery of multicast packets between neighboring
routers or across an internetwork.
To provide an internetwork delivery service, it is necessary to define
multicast routing protocols. A multicast routing protocol is
responsible for the construction of multicast delivery trees and
enabling multicast packet forwarding. This section explores a number of
different techniques that may potentially be employed by multicast
routing protocols:
o "Simpleminded" Techniques o "Simpleminded" Techniques
- Flooding - Flooding
- Spanning Trees - Multicast Extensions to MAC-layer Spanning Trees
o Source-Based Tree (SBT) Techniques o Source-Based Tree (SBT) Techniques
- Reverse Path Broadcasting (RPB) - Reverse Path Broadcasting (RPB)
- Truncated Reverse Path Broadcasting (TRPB) - Truncated Reverse Path Broadcasting (TRPB)
- Reverse Path Multicasting (RPM) - Reverse Path Multicasting (RPM)
o "Shared-Tree" Techniques o "Shared-Tree" Techniques
Later sections will describe how these algorithms are implemented in the
most prevalent multicast routing protocols in the Internet today (e.g.,
the Distance Vector Multicast Routing Protocol (DVMRP), Multicast
extensions to OSPF (MOSPF), Protocol-Independent Multicast (PIM), and
Core-Based Trees (CBT)).
6.1 "Simpleminded" Techniques 6.1 "Simpleminded" Techniques
Flooding and Multicast Extensions to MAC-layer Spanning Trees are two
algorithms that could be used to build primitive multicast routing
protocols. The techniques are primitive because they tend to waste
bandwidth or require a large amount of computational resources within
the participating multicast routers. Also, protocols built on these
techniques may work for small networks with few senders, groups, and
routers, but do not scale well to larger numbers of senders, groups, or
routers. Finally, the ability to handle arbitrary topologies may be
absent, or may only be present in limited ways.
6.1.1 Flooding
The simplest technique for delivering multicast datagrams to all routers
in an internetwork is to implement a flooding algorithm. The flooding
procedure begins when a router receives a packet that is addressed to a
multicast group. The router employs a protocol mechanism to determine
whether or not it has seen this particular packet before. If it is the
first reception of the packet, the packet is forwarded on all interfaces
(except the one on which it arrived), guaranteeing that the multicast
packet reaches all routers in the internetwork. If the router has seen
the packet before, then the packet is discarded.
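A sketch of this decision (in Python, with hypothetical helper names)
makes the duplicate-detection requirement explicit:

    recently_seen = set()   # per-router cache of recently seen packets

    def flood(packet, arrival_interface, interfaces, transmit):
        """Sketch of flooding: forward a multicast packet everywhere,
        exactly once.

        'packet.ident' stands in for whatever the protocol uses to
        recognize a packet it has already handled, and 'transmit' is a
        hypothetical send-on-interface hook. A real router would also
        have to age old entries out of the cache.
        """
        if packet.ident in recently_seen:
            return                       # seen before: discard
        recently_seen.add(packet.ident)
        for interface in interfaces:
            if interface != arrival_interface:
                transmit(packet, interface)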
A flooding algorithm is very simple to implement since a router does not
have to maintain a routing table and only needs to keep track of the
most recently seen packets. However, flooding does not scale for
Internet-wide applications since it generates a large number of
duplicate packets and uses all available paths across the internetwork
instead of just a limited number. Also, the flooding algorithm makes
inefficient use of router memory resources since each router is required
to maintain a distinct table entry for each recently seen packet.
6.1.2 Multicast Extensions to MAC-layer Spanning Trees
A more effective solution than flooding would be to select a subset of
the internetwork topology which forms a spanning tree. The spanning
tree defines a structure in which only one active path connects any two
routers of the internetwork. Figure 6 shows an internetwork and a
spanning tree rooted at router RR.
Once the spanning tree has been built, a multicast router simply
forwards each multicast packet to all interfaces that are part of the
spanning tree except the one on which the packet originally arrived.
Forwarding along the branches of the spanning tree guarantees that the
multicast packet will not loop and that it will eventually reach all
routers in the internetwork.
A spanning tree solution is powerful and would be relatively easy to
implement since there is a great deal of experience with spanning tree
protocols in the Internet community. However, a spanning tree solution
can centralize traffic on a small number of links, and may not provide
the most efficient path between the source subnetwork and group members.
Also, it is computationally difficult to compute a spanning tree in
large, complex topologies.
========================================================================

   A Sample Internetwork:

      [ASCII-art diagram of a sample internetwork of routers]

   One Possible Spanning Tree for this Sample Internetwork:

      [ASCII-art diagram of a spanning tree rooted at router RR]

                                 LEGEND

                  #      Router
                  RR     Root Router

                       Figure 6: Spanning Tree
========================================================================
6.2 Source-Based Tree Techniques
The following techniques all generate a source-based tree by various
means. The techniques differ in the efficiency of the tree building
process, and the bandwidth and router resources (i.e., state tables)
used to build a source-based tree.
6.2.1 Reverse Path Broadcasting (RPB)
A more efficient solution than building a single spanning tree for the
entire internetwork would be to build a spanning tree for each potential
source [subnetwork]. These spanning trees would result in source-based
delivery trees emanating from the subnetworks directly connected to the
source stations. Since there are many potential sources for a group, a
different delivery tree is constructed rooted at each active source.
6.2.1.1 Reverse Path Broadcasting: Operation
The fundamental algorithm to construct these source-based trees is
referred to as Reverse Path Broadcasting (RPB). The RPB algorithm is
actually quite simple. For each source, if a packet arrives on a link
that the local router believes to be on the shortest path back toward
the packet's source, then the router forwards the packet on all other
interfaces; if the packet does not arrive on that link, it is simply
discarded. The interface over which the router expects to receive
packets from a given source is called the "parent" link for that
source, and the other interfaces are potential "child" links. This
basic algorithm can be enhanced to reduce unnecessary packet
duplication. If the local router making the forwarding decision can
determine whether a neighboring router on a child link is "downstream,"
then the packet is multicast toward the neighbor. (A "downstream"
neighbor is a neighboring router which considers the local router to be
on the shortest path back toward a given source.) Otherwise, the packet
is not forwarded on the potential child link since the local router
knows that the neighboring router will just discard the packet (since it
will arrive on a non-parent link for the source, relative to that
downstream router).
The information to make this "downstream" decision is relatively easy to
derive from a link-state routing protocol since each router maintains a
topological database for the entire routing domain. If a distance-
vector routing protocol is employed, a neighbor can either advertise its
previous hop for the source as part of its routing update messages or
"poison reverse" the route toward a source if it is not on the delivery
tree for that source. Either of these techniques allows an upstream
router to determine if a downstream neighboring router is on an active
branch of the distribution tree for a certain source.
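The per-packet decision can be sketched compactly. In the fragment
below, 'parent_link(source)' stands for a lookup of the interface this
router believes is on the shortest path back to the source, and
'is_downstream(link, source)' reflects the information derived as
described above; the names are our own, not protocol elements.

    def rpb_forward(packet, arrival_interface, parent_link, child_links,
                    is_downstream, transmit):
        """Sketch of the (enhanced) Reverse Path Broadcasting check."""
        # Accept the packet only if it arrived on the parent link, i.e.
        # the interface on the shortest path back toward its source.
        if arrival_interface != parent_link(packet.source):
            return                              # discard off-tree copy

        for link in child_links:
            # Enhancement: skip child links whose neighbor does not
            # consider this router to be on its shortest path back to
            # the source, since that neighbor would discard the packet.
            if is_downstream(link, packet.source):
                transmit(packet, link)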
========================================================================

                              Source
                                |  ^
                                |  :   shortest path back to the
                                |  :   source for THIS router
                                |  :
                           "parent link"
                                 _
                          ______|!2|_____
                         |               |
            --"child  -|!1|             |!3|-  "child --
               link"     |    ROUTER     |      link"
                         |_______________|

      Figure 7: Reverse Path Broadcasting - Forwarding Algorithm
========================================================================
Note that the source station (S) is attached to a leaf subnetwork
directly connected to Router A. For this example, we will look at the
RPB algorithm from Router B's perspective. Router B receives the
multicast packet from Router A on link 1. Since Router B considers link
1 to be the parent link for the (source, group) pair, it forwards the
packet on link 4, link 5, and the local leaf subnetworks if they contain
group members. Router B does not forward the packet on link 3 because
it knows from routing protocol exchanges that Router C considers link 2
as its parent link for the source. Router B knows that if it were to
forward the packet on link 3, it would be discarded by Router C since
the packet would not be arriving on Router C's parent link for this
source.
Figure 8 illustrates the preceding discussion of the enhanced RPB
algorithm's basic operation.
6.2.1.2 RPB: Benefits and Limitations
The key benefit to reverse path broadcasting is that it is reasonably
efficient and easy to implement. It does not require that the router
know about the entire spanning tree, nor does it require a special
mechanism to stop the forwarding process (as flooding does). In
addition, it guarantees efficient delivery since multicast packets
always follow the "shortest" path from the source station to the
destination group. Finally, the packets are distributed over multiple
links, resulting in better network utilization since a different tree is
computed for each source.
One of the major limitations of the RPB algorithm is that it does not
take into account multicast group membership when building the delivery
tree for a source. As a result, extraneous datagrams may be forwarded
onto subnetworks that have no group members.
========================================================================

   [ASCII-art diagram: the source station's leaf subnetwork is attached
    to Router A; Routers A, B, and C are interconnected by links 1
    through 5, with leaf subnetworks hanging off each router]

                                 LEGEND

                  O       Leaf
                  + +     Shortest-path
                  - -     Branch
                  #       Router

             Figure 8: Reverse Path Broadcasting - Example
========================================================================
6.2.2 Truncated Reverse Path Broadcasting (TRPB)
Truncated Reverse Path Broadcasting (TRPB) was developed to overcome the
limitations of Reverse Path Broadcasting. With information provided by
IGMP, multicast routers determine the group memberships on each leaf
subnetwork and avoid forwarding datagrams onto a leaf subnetwork if it
does not contain at least one member of a given destination group. Thus,
the delivery tree is "truncated" by the router if a leaf subnetwork has
no group members.
======================================================================

   [ASCII-art diagram: a router receives a (source, group) datagram on
    its parent link and decides, per child link, whether to Forward
    (a leaf with a group member such as G1 is present) or to Truncate
    (no member of the destination group on that leaf)]

         Figure 9: Truncated Reverse Path Broadcasting - (TRPB)
======================================================================
TRPB removes some limitations of RPB but it solves only part of the
problem. It eliminates extraneous traffic on leaf subnetworks, but it
does not consider group memberships when building the branches of the
delivery tree.
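The truncation itself is a small addition to the RPB forwarding step
sketched earlier: before forwarding onto a leaf subnetwork, consult the
group membership state learned through IGMP. The helper names below are
illustrative only.

    def trpb_forward_leaves(packet, leaf_interfaces, has_igmp_members,
                            transmit):
        """Sketch of TRPB's 'truncation' of leaf subnetworks.

        'has_igmp_members(interface, group)' stands for the membership
        state built up from IGMP Reports heard on that interface.
        """
        for interface in leaf_interfaces:
            if has_igmp_members(interface, packet.group):
                transmit(packet, interface)   # forward: member present
            # otherwise: truncate -- no member of this group on the leaf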
6.2.3 Reverse Path Multicasting (RPM)

Reverse Path Multicasting (RPM) is an enhancement to Reverse Path
Broadcasting and Truncated Reverse Path Broadcasting.
RPM creates a delivery tree that spans only 1) subnetworks with group
members, and 2) routers and subnetworks along the shortest path to those
subnetworks. RPM allows the source-based "shortest-path" tree to be
"pruned" so that datagrams are only forwarded along branches that lead
to active members of the destination group.
6.2.3.1 Operation
When a multicast router receives a packet for a (source, group) pair,
the first packet is forwarded following the TRPB algorithm across all
routers in the internetwork. Routers on the edge of the network (which
have only leaf subnetworks) are called leaf routers. The TRPB algorithm
guarantees that each leaf router will receive at least the first
multicast packet. If there is a group member on one of its leaf
subnetworks, a leaf router forwards the packet based on this group
membership information.
========================================================================

   [ASCII-art diagram: a source-rooted delivery tree in which branches
    leading to leaves with group members (G) remain active, while
    branches leading only to leaves without group members (o) are
    pruned; prune messages flow one hop back toward the source]

                                 LEGEND

                  #       Router
                  o       Leaf without group member
                  G       Leaf with group member
                  ***     Active Branch
                  ---     Pruned Branch
                  ,>,     Prune Message (direction of flow -->)

               Figure 10: Reverse Path Multicasting (RPM)
========================================================================
If none of the subnetworks connected to the leaf router contain group
members, the leaf router may transmit a "prune" message on its parent
link, informing the upstream router that it should not forward packets
for this particular (source, group) pair on the child interface on which
it received the prune message. Prune messages are sent just one hop
back toward the source.
An upstream router receiving a prune message is required to store the
prune information in memory. If the upstream router has no recipients
on local leaf subnetworks and has received prune messages from each
downstream neighbor on each of the child interfaces for this (source,
group) pair, then the upstream router does not need to receive any more
packets for this (source, group) pair. Therefore, the upstream router
can also generate a prune message of its own, one hop further back
toward the source. This cascade of prune messages results in an active
multicast delivery tree, consisting exclusively of "live" branches
(i.e., branches that lead to active receivers).
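The bookkeeping implied by this cascade can be sketched as follows (the
class and hook names are ours, chosen only to make the rule explicit; a
real router would also track individual downstream neighbors sharing a
child interface):

    class PruneState:
        """Sketch of per-(source, group) prune bookkeeping in RPM."""

        def __init__(self, child_interfaces, has_local_members):
            self.child_interfaces = set(child_interfaces)
            self.pruned_children = set()
            self.has_local_members = has_local_members

        def on_prune_received(self, child_interface, send_prune_upstream):
            """Record a prune from a downstream neighbor; maybe cascade."""
            self.pruned_children.add(child_interface)
            if (not self.has_local_members and
                    self.pruned_children == self.child_interfaces):
                # Every child interface is pruned and there are no local
                # receivers, so this router no longer needs the traffic.
                send_prune_upstream()

        def on_prune_timeout(self):
            """Prune state is soft; once it expires, traffic flows again."""
            self.pruned_children.clear()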
Since both the group membership and internetwork topology can change
dynamically, the pruned state of the multicast delivery tree must be
refreshed periodically. At regular intervals, the prune information
expires from the memory of all routers and the next packet for the
(source, group) pair is forwarded toward all downstream routers. This
allows "stale state" (prune state for groups that are no longer active)
to be reclaimed by the multicast routers.
6.2.3.2 Limitations
Despite the improvements offered by the RPM algorithm, there are still
several scaling issues that need to be addressed when attempting to
develop an Internet-wide delivery service. The first limitation is that
multicast packets must be periodically broadcast across every router in
the internetwork, onto every leaf subnetwork. This "broadcasting" is
wasteful of bandwidth (until the updated prune state is constructed).
This "flood and prune" paradigm is very powerful, but it wastes This "broadcast and prune" paradigm is very powerful, but it wastes
bandwidth and does not scale well, especially if there are receivers at bandwidth and does not scale well, especially if there are receivers at
the edge of the delivery tree which are connected via low-speed the edge of the delivery tree which are connected via low-speed
technologies (e.g., ISDN or modem). Also, note that every router technologies (e.g., ISDN or modem). Also, note that every router
participating in the RPM algorithm must either have a forwarding table participating in the RPM algorithm must either have a forwarding table
entry for a (source, group) pair, or have prune state information for entry for a (source, group) pair, or have prune state information for
that (source, group) pair. that (source, group) pair.
It is clearly wasteful (especially as the number of active sources and
groups increases) to place such a burden on routers that are not on
every (or perhaps any) active delivery tree. Shared tree techniques are
an alternative designed to address these scaling issues.
6.3.3 Limitations
Despite these benefits, there are still several limitations to protocols
that are based on a shared tree algorithm. Shared trees may result in
traffic concentration and bottlenecks near core routers since traffic
from all sources traverses the same set of links as it approaches the
core. In addition, a single shared delivery tree may create suboptimal
routes (a shortest path between the source and the shared tree, a
suboptimal path across the shared tree, a shortest path between the
group's core router and the receiver's directly-attached router),
resulting in increased delay which may be a critical issue for some
multimedia applications. (Simulations indicate that latency over a
shared tree may be approximately 10% larger than over source-based
trees in many cases, but by the same token, this may be negligible for
many applications.)

Finally, expanding-ring searches will probably not work as expected
inside shared-tree domains. The searching host's increasing TTL will
cause the packets to work their way up the shared tree, and while a
desired resource may still be found, it may not be as topologically
close as one would expect.
7. "DENSE MODE" ROUTING PROTOCOLS 7. "DENSE MODE" ROUTING PROTOCOLS
Certain multicast routing protocols are designed to work well in
environments that have plentiful bandwidth and where it is reasonable
to assume that receivers are rather densely distributed. In such
scenarios, it is very reasonable to use periodic flooding, or other
bandwidth-intensive techniques that would not necessarily be very
scalable over a wide-area network. In section 8, we will examine
different protocols that are specifically geared toward efficient WAN
operation, especially for groups that have widely dispersed (i.e.,
sparse) membership.
So-called "dense"-mode routing protocols include:

      o the Distance Vector Multicast Routing Protocol (DVMRP),
      o Multicast Extensions to Open Shortest Path First (MOSPF), and
      o Protocol Independent Multicast - Dense Mode (PIM-DM).
These protocols' underlying designs assume that the amount of protocol
overhead (in terms of the amount of state that must be maintained by
each router, the number of router CPU cycles required, and the amount of
bandwidth consumed by protocol operation) is appropriate since receivers
densely populate the area of operation.
7.1. Distance Vector Multicast Routing Protocol (DVMRP)
DVMRP was first defined in RFC-1075. The original specification was
derived from the Routing Information Protocol (RIP) and employed the
Truncated Reverse Path Broadcasting (TRPB) technique. The major
difference between RIP and DVMRP is that RIP calculates the next-hop
toward a destination, while DVMRP computes the previous-hop back toward
a source. Since mrouted 3.0, DVMRP has employed the Reverse Path
Multicasting (RPM) algorithm. Thus, the latest implementations of DVMRP
are quite different from the original RFC specification in many regards.
There is an active effort within the IETF Inter-Domain Multicast Routing
(IDMR) working group to specify DVMRP version 3 in a well-documented
form.

The current DVMRP v3 Internet-Draft, representing a snapshot of a work-
in-progress, is:

      <draft-ietf-idmr-dvmrp-v3-04.txt>, or <draft-ietf-idmr-dvmrp-v3-04.ps>
7.1.1 Physical and Tunnel Interfaces
The interfaces of a DVMRP router may be either a physical interface to
a directly-attached subnetwork or a tunnel interface (a.k.a. virtual
interface) to another multicast-capable region. All interfaces are
configured with a metric specifying their individual costs, and a TTL
threshold which a multicast packet's TTL must exceed in order for the
packet to be forwarded across that interface. In addition, each tunnel
interface must be explicitly configured with two additional parameters:
the IP address of the local router's tunnel interface and the IP
address of the remote router's tunnel interface.
========================================================================

        TTL              Scope
     Threshold
 _______________________________________________________________________
         0               Restricted to the same host
         1               Restricted to the same subnetwork
        15               Restricted to the same site
        63               Restricted to the same region
       127               Worldwide
       191               Worldwide; limited bandwidth
       255               Unrestricted in scope

                  Table 1: TTL Scope Control Values
========================================================================
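The interplay between a packet's TTL and an interface's threshold can
be shown with a short sketch. The rule coded below simply follows the
wording above (the packet's TTL must exceed the interface threshold);
the names are our own.

    # TTL scope control values from Table 1, used as interface thresholds.
    TTL_THRESHOLDS = {
        "same-host": 0,
        "same-subnet": 1,
        "same-site": 15,
        "same-region": 63,
        "worldwide": 127,
        "worldwide-limited-bandwidth": 191,
        "unrestricted": 255,
    }

    def may_forward(packet_ttl, interface_threshold):
        """The packet's TTL must exceed the configured threshold for
        the packet to be forwarded across that interface."""
        return packet_ttl > interface_threshold

    # A packet arriving at a boundary with TTL 16 can cross a site-scoped
    # interface (threshold 15) but never a region boundary (threshold 63).
    assert may_forward(16, TTL_THRESHOLDS["same-site"])
    assert not may_forward(16, TTL_THRESHOLDS["same-region"])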
TTL-based scoping, however, does not always map cleanly onto the
boundaries of administrative regions. In light of these issues,
"administrative" scoping was created in 1994, to provide a way to do
scoping based on multicast address. Certain addresses would be usable
within a given administrative scope (e.g., a corporate internetwork)
but would not be forwarded onto the global MBone. This allows for
privacy, and address reuse within the class D address space. The range
from 239.0.0.0 to 239.255.255.255 has been reserved for administrative
scoping. While administrative scoping has been in limited use since
1994 or so, it has yet to be widely deployed. The IETF MBoneD working
group is working on the deployment of administrative scoping. For
additional information, please see <draft-ietf-mboned-admin-ip-
space-03.txt> or the successor to this work-in-progress, entitled
"Administratively Scoped IP Multicast."
7.1.2 Basic Operation
DVMRP implements the Reverse Path Multicasting (RPM) algorithm.
According to RPM, the first datagram for any (source, group) pair is
forwarded across the entire internetwork (providing the packet's TTL and
router interface thresholds permit this). Upon receiving this traffic,
leaf routers may transmit prune messages back toward the source if there
are no group members on their directly-attached leaf subnetworks. The
prune messages remove all branches that do not lead to group members
from the tree, leaving a source-based shortest path tree.
After a period of time, the prune state for each (source, group) pair
expires to reclaim router memory that is being used to store prune
state pertaining to groups that are no longer active. If those groups
happen to still be in use, a subsequent datagram for the (source,
group) pair will be broadcast across all downstream routers. This will
result in a new set of prune messages, serving to regenerate the
(source, group) pair's source-based shortest-path tree. Current
implementations of RPM (notably DVMRP) do not transmit prune messages
reliably, so the prune lifetime must be kept short to compensate for
potential lost prune messages. (The internet-draft of DVMRP 3.0
incorporates a mechanism to make prunes reliable.)
DVMRP also implements a mechanism to quickly "graft" back a previously
pruned branch of a group's delivery tree. If a router that had sent a
prune message for a (source, group) pair discovers new group members on
a leaf network, it sends a graft message to the previous-hop router for
this source. When an upstream router receives a graft message, it
cancels out the previously-received prune message. Graft messages
cascade (reliably) hop-by-hop back toward the source until they reach
the nearest "live" branch point on the delivery tree. In this way,
previously-pruned branches are quickly restored to a given delivery
tree.
7.1.3 DVMRP Router Functions
In Figure 12, Router C is downstream and may potentially receive
datagrams from the source subnetwork from Router A or Router B. If
Router A's metric to the source subnetwork is less than Router B's
metric, then Router A is dominant over Router B for this source.
This means that Router A will forward any traffic from the source
subnetwork and Router B will discard traffic received from that source.
However, if Router A's metric is equal to Router B's metric, then the
router with the lower IP address on its downstream interface (child
link) becomes the Dominant Router for this source. Note that on a
subnetwork with multiple routers forwarding to groups with multiple
sources, different routers may be dominant for each source.
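The selection rule can be captured in a few lines (a sketch with names
of our own; each candidate router is represented by its child-link IP
address and its advertised metric back to the source subnetwork):

    import ipaddress

    def dominant_router(candidates):
        """Sketch of DVMRP dominant-router selection for one source.

        'candidates' maps a router's downstream (child link) IP address
        to its metric back to the source subnetwork. The lowest metric
        wins; ties are broken by the lowest IP address.
        """
        return min(candidates,
                   key=lambda addr: (candidates[addr],
                                     ipaddress.IPv4Address(addr)))

    # Two routers advertise equal metrics, so the lower IP address wins:
    print(dominant_router({"128.7.5.2": 3, "128.7.5.9": 3}))  # 128.7.5.2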
7.1.4 DVMRP Routing Table
The DVMRP process periodically exchanges routing table updates with its
DVMRP neighbors. These updates are independent of those generated by
any unicast Interior Gateway Protocol.
Since the DVMRP was developed to route multicast and not unicast
traffic, a router will probably run multiple routing processes in
practice: One to support the forwarding of unicast traffic and another
to support the forwarding of multicast traffic. (This can be convenient:
A router can be configured to only route multicast IP, with no unicast
========================================================================

   [ASCII-art diagram: Routers A and B both lead back toward the source
    subnetwork and share a common child link (LAN), from which the
    downstream Router C receives traffic]

       Figure 12. DVMRP Dominant Router in a Redundant Topology
========================================================================
IP routing. This may be a useful capability in firewalled
environments.)
Again, consider Figure 12.  There are two types of routers in this
figure: dominant and subordinate.  Assume in this example that Router B
is dominant, Router A is subordinate, and Router C is part of the
downstream distribution tree.  In general, which routers are dominant
or subordinate may be different for each source!  A subordinate router
is one that is NOT on the shortest-path tree back toward a source.  The
dominant router can tell this because the subordinate router will
'poison-reverse' the route for this source in its routing updates which
are sent on the common LAN (i.e., Router A sets the metric for this
source to 'infinity').  The dominant router keeps track of subordinate
routers on a per-source basis; it never needs or expects to receive a
prune message from a subordinate router.  Only routers that are truly on
the downstream distribution tree will ever need to send prunes to the
dominant router.  If a dominant router on a LAN has received either a
poison-reversed route for a source, or prunes for all groups emanating
from that source subnetwork, then it may itself send a prune upstream
toward the source (assuming also that IGMP has told it that there are no
local receivers for any group from this source).
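
As a purely illustrative aid (this sketch is not taken from the DVMRP
specification; the data structures, field names, and example values are
invented here), the decision just described might be expressed roughly
as follows in Python:

   # Hypothetical sketch of a dominant DVMRP router's "may I prune
   # upstream?" decision for one source subnetwork.
   def may_send_prune_upstream(downstream_routers, local_groups_for_source):
       """Prune upstream only when every downstream router on the LAN has
       either poison-reversed the source's route (it is subordinate) or
       pruned all groups from that source, AND IGMP reports no local
       receivers for any group from this source."""
       for rtr in downstream_routers:
           if rtr["poison_reversed"]:
               continue                    # subordinate: never expected to prune
           if not rtr["pruned_all_groups"]:
               return False                # still feeding an active branch
       return len(local_groups_for_source) == 0   # no local IGMP members either

   # Example corresponding to Figure 12: Router A is subordinate,
   # Router C has pruned all groups, and IGMP shows no local members.
   downstream = [
       {"name": "A", "poison_reversed": True,  "pruned_all_groups": False},
       {"name": "C", "poison_reversed": False, "pruned_all_groups": True},
   ]
   print(may_send_prune_upstream(downstream, local_groups_for_source=set()))
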
A sample routing table for a DVMRP router is shown in Figure 13.  Unlike
the table that would be created by a unicast routing protocol such as
RIP, OSPF, or BGP, the DVMRP routing table contains Source Prefixes and
From-Gateways instead of Destination Prefixes and Next-Hop Gateways.

========================================================================

   Source       Subnet        From-        Metric   Status   TTL
   Prefix       Mask          Gateway

   128.1.0.0    255.255.0.0   128.7.5.2       3       Up     200
   128.2.0.0    255.255.0.0   128.7.5.2       5       Up     150
   128.3.0.0    255.255.0.0   128.6.3.1       2       Up     150
   128.3.0.0    255.255.0.0   128.6.3.1       4       Up     200

                   Figure 13: DVMRP Routing Table

========================================================================
The routing table represents the shortest path (source-based) spanning
tree to every possible source prefix in the internetwork--the Reverse
Path Broadcasting (RPB) tree.  The DVMRP routing table does not
represent group membership or received prune messages.
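
Purely as an illustration (the entries below are copied from Figure 13,
but the lookup code and names are invented for this example), a
DVMRP-style routing table can be thought of as a longest-match lookup
keyed on the packet's SOURCE address rather than its destination:

   import ipaddress

   # Entries of Figure 13 as (source prefix, from-gateway, metric).
   dvmrp_routes = [
       (ipaddress.ip_network("128.1.0.0/16"), "128.7.5.2", 3),
       (ipaddress.ip_network("128.2.0.0/16"), "128.7.5.2", 5),
       (ipaddress.ip_network("128.3.0.0/16"), "128.6.3.1", 2),
       (ipaddress.ip_network("128.3.0.0/16"), "128.6.3.1", 4),
   ]

   def lookup_from_gateway(source_addr):
       """Longest-prefix match on the SOURCE address; ties broken by the
       lower metric.  Returns the From-Gateway expected to be upstream."""
       src = ipaddress.ip_address(source_addr)
       candidates = [(net.prefixlen, -metric, gw)
                     for (net, gw, metric) in dvmrp_routes if src in net]
       if not candidates:
           return None
       candidates.sort()        # longest prefix, then lowest metric, wins
       return candidates[-1][2]

   print(lookup_from_gateway("128.3.9.1"))   # -> 128.6.3.1
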
                   prune message has been sent to the upstream
                   router (the From-Gateway for this Source Prefix
                   in the DVMRP routing table).

    OutIntf(s)     The child interfaces over which multicast
                   datagrams for this (source, group) pair are
                   forwarded.  A 'p' in this column indicates
                   that the router has received a prune message(s)
                   from a (all) downstream router(s) on this port.
7.1.6 DVMRP Tree Building and Forwarding Summary
As we have seen, DVMRP enables packets to be forwarded away from a
multicast source along the Reverse Path Multicasting (RPM) tree. The
general name for this technique is Reverse Path Forwarding, and it is
used in some other multicast routing protocols, as we shall see later.
Reverse Path Forwarding was described in Steve Deering's Ph.D. thesis,
and was a refinement of early work on Reverse Path Broadcasting done by
Dalal and Metcalfe in 1978. In our discussion of RPB, we saw that it
was wasteful in its excessive usage of network resources, because all
nodes received a (single) copy of each packet. No effort was made in
RPB to prune the tree to only include branches leading to active
receivers. After all, RPB was a broadcast technique, not a multicast
technique.
Truncated RPB added a small optimization so that IGMP was used to tell
if there were listeners on the LANs at the edge of the RPB tree; if
there were no listeners, the packets were stopped at the leaf routers
and not forwarded onto the edge LANs. Despite the savings of LAN
bandwidth, TRPB still sends a copy of each packet to every router in the
topology.
Reverse Path Multicasting, used in DVMRP, takes the group membership
information derived from IGMP and sends control messages upstream (i.e.,
toward the source subnetwork) in order to prune the distribution tree.
This technique trades off some memory in the participating routers (to
store the prune information) in order to gain back the wasted bandwidth
on branches that do not lead to interested receivers.
In DVMRP, the RPM distribution tree is created on demand to describe the
forwarding table for a given source sending to a group. As described in
section 7.1.5, the forwarding table indicates the expected inbound
interface for packets from this source, and the expected outbound
interface(s) for distribution to the rest of the group. Forwarding table
entries are created when packets to a "new" (source, group) pair arrive
at a DVMRP router. As each packet is received, its source and group are
matched against the appropriate row of the forwarding table. If the
packet was received on the correct inbound interface, it is forwarded
downstream on the appropriate outbound interfaces for this group.
DVMRP's tree-building protocol is often called "broadcast-and-prune"
because the first time a packet for a new (source, group) pair arrives,
it is transmitted towards all routers in the internetwork. Then the
edge routers initiate prunes. Unnecessary delivery branches are pruned
within the round-trip time from the top of a branch to the furthest leaf
router, typically on the order of tens of milliseconds or less; thus, the
distribution tree for this new (source, group) pair is quickly trimmed
to serve only active receivers.
The check DVMRP routers do when a packet arrives at the router is called
the "reverse-path check." The first thing a router must do upon receipt
of a multicast packet is determine that it arrived on the "correct"
inbound interface. For packets matching (source, group) pairs that this
router has already seen, there will already be a forwarding table entry
indicating the expected incoming interface. For "new" packets, DVMRP's
routing table is used to compare the actual receiving interface with the
one that is considered by the router to be on the shortest path back to
the source. If the reverse-path check succeeds, the packet is forwarded
only on those interfaces that the router considers "child" interfaces
(with respect to the source of the packet); there may be some interfaces
that, for a given source, are neither child nor incoming interfaces.
Child interfaces attach to subnetworks which use this router as their
previous-hop on the shortest path toward the source (router addresses on
this subnetwork are used as a tie-breaker when there is more than one
such router).
Once the incoming interface and valid downstream child interfaces for
this (source, group) are determined, a forwarding table entry is created
to enable quick forwarding of future packets for this (source, group)
pair. A multicast packet must never be forwarded back toward its source:
This would result in a forwarding loop.
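
The per-packet behavior described in this section can be sketched as
follows.  This is an illustration only, not an excerpt of any DVMRP
implementation; the interface names and helper functions are invented
for the example:

   # Sketch of DVMRP's reverse-path check and (source, group)
   # forwarding-cache handling.
   forwarding_table = {}   # (source_subnet, group) -> {"iif": ..., "oifs": [...]}

   def handle_packet(source_subnet, group, arrival_iface,
                     rpf_iface_for, child_ifaces_for, send):
       entry = forwarding_table.get((source_subnet, group))
       if entry is None:
           # First packet of a "new" (source, group): consult the DVMRP
           # routing table for the expected inbound (RPF) interface, compute
           # the child interfaces for this source, and cache the result.
           entry = {"iif": rpf_iface_for(source_subnet),
                    "oifs": child_ifaces_for(source_subnet)}
           forwarding_table[(source_subnet, group)] = entry
       if arrival_iface != entry["iif"]:
           return            # failed reverse-path check: drop, never loop back
       for oif in entry["oifs"]:
           send(group, oif)  # forward only on downstream (child) interfaces

   # Toy usage: expected inbound is "eth0", children are "eth1" and "eth2".
   handle_packet("128.3.0.0/16", "224.1.1.1", "eth0",
                 rpf_iface_for=lambda s: "eth0",
                 child_ifaces_for=lambda s: ["eth1", "eth2"],
                 send=lambda g, i: print("forward", g, "on", i))
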
7.2. Multicast Extensions to OSPF (MOSPF)

Version 2 of the Open Shortest Path First (OSPF) routing protocol is
defined in RFC-1583.  OSPF is an Interior Gateway Protocol (IGP) that
distributes unicast topology information among routers belonging to a
single OSPF "Autonomous System."  OSPF is based on link-state algorithms
which permit rapid route calculation with a minimum of routing protocol
traffic.  In addition to efficient route calculation, OSPF is an open
standard that supports hierarchical routing, load balancing, and the
import of external routing information.
When the initial datagram arrives, the source subnetwork is located in
the MOSPF link state database.  The MOSPF link state database is simply
the standard OSPF link state database with the addition of Group-
Membership LSAs.  Based on the Router- and Network-LSAs in the OSPF
link state database, a source-based shortest-path tree is constructed
using Dijkstra's algorithm.  After the tree is built, Group-Membership
LSAs are used to prune the tree such that the only remaining branches
lead to subnetworks containing members of this group.  The output of
these algorithms is a pruned source-based tree rooted at the datagram's
source (Figure 15).
========================================================================

                                S
                                |
                                |
                              A #
                               / \
                              /   \
                             1     2
                            /       \
                         B #         # C
                          / \         \
                         /   \         \
                        3     4         5
                       /       \         \
                    D #         # E       # F        LEGEND
                                 / \        \
                                /   \        \       # Router
                               6     7        8
                              /       \        \
                           G #         # H      # I

            Figure 15. Shortest Path Tree for a (S, G) pair

========================================================================
To forward multicast datagrams to downstream members of a group, each
router must determine its position in the datagram's shortest path tree.
Assume that Figure 15 illustrates the shortest path tree for a given
(source, group) pair.  Router E's upstream node is Router B and there
are two downstream interfaces: one connecting to Subnetwork 6 and
another connecting to Subnetwork 7.
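
The following toy sketch (the topology, names, and equal link costs are
invented, loosely following Figure 15; real MOSPF operates on Router-,
Network-, and Group-Membership LSAs) shows the conceptual steps: run
Dijkstra from the source, prune branches that lead to no group members,
then read off this router's own position in the pruned tree:

   import heapq

   links = {            # symmetric links with equal costs, for simplicity
       "S": {"A": 1}, "A": {"S": 1, "B": 1, "C": 1},
       "B": {"A": 1, "D": 1, "E": 1}, "C": {"A": 1, "F": 1},
       "D": {"B": 1}, "E": {"B": 1, "G": 1, "H": 1}, "F": {"C": 1, "I": 1},
       "G": {"E": 1}, "H": {"E": 1}, "I": {"F": 1},
   }
   members = {"G", "H", "I"}   # nodes with members (from Group-Membership LSAs)

   def dijkstra_parents(source):
       dist, parent, pq = {source: 0}, {}, [(0, source)]
       while pq:
           d, u = heapq.heappop(pq)
           if d > dist.get(u, float("inf")):
               continue
           for v, cost in links[u].items():
               if d + cost < dist.get(v, float("inf")):
                   dist[v], parent[v] = d + cost, u
                   heapq.heappush(pq, (d + cost, v))
       return parent

   parent = dijkstra_parents("S")

   # Prune: a node stays on the tree only if it lies on a path to a member.
   on_tree = set()
   for m in members:
       n = m
       while n != "S":
           on_tree.add(n)
           n = parent[n]

   me = "E"   # this router determines its own position in the pruned tree
   print("upstream:", parent[me])                                   # B
   print("downstream:", [n for n in on_tree if parent.get(n) == me])  # G, H
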
Note the following properties of the basic MOSPF routing algorithm:

    o For a given multicast datagram, all routers within an OSPF
      area calculate the same source-based shortest path delivery
      tree.  Tie-breakers have been defined to guarantee that if
      several equal-cost paths exist, all routers choose the same
      one, so that every router computes an identical delivery tree.

multicast sessions, these effects should be minimal.
7.2.1.3 Forwarding Cache

Each MOSPF router makes its forwarding decision based on the contents of
its forwarding cache.  Contrary to DVMRP, MOSPF forwarding is not RPF-
based.  The forwarding cache is built from the source-based shortest-
path tree for each (source, group) pair, and the router's local group
database.  After the router discovers its position in the shortest path
tree, a forwarding cache entry is created containing the (source, group)
pair, its expected "upstream" incoming interface, and the necessary
"downstream" outgoing interface(s).  The forwarding cache entry is then
used to quickly forward all subsequent datagrams from this source to
this group.  If a new source begins sending to a new group, MOSPF must
first calculate the distribution tree so that it may create a cache
entry that can be used to forward the packet.
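
As a hedged sketch only (the structure and names below are invented, not
part of the MOSPF specification), the forwarding cache behaves roughly
as follows.  Note that, unlike DVMRP, the expected incoming interface
comes from the router's position in the Dijkstra tree, not from a
reverse-path lookup:

   cache = {}  # (source, group) -> {"upstream_iif": ..., "downstream_oifs": [...]}

   def forward(source, group, arrival_iface, build_tree_position, send):
       if (source, group) not in cache:
           # Cache miss: compute the source-based tree on demand and record
           # this router's position in it (upstream and downstream interfaces).
           cache[(source, group)] = build_tree_position(source, group)
       entry = cache[(source, group)]
       if arrival_iface != entry["upstream_iif"]:
           return                            # not from the tree's upstream side
       for oif in entry["downstream_oifs"]:
           send(group, oif)

   forward("128.1.2.3", "224.2.2.2", "if-1",
           build_tree_position=lambda s, g: {"upstream_iif": "if-1",
                                             "downstream_oifs": ["if-6", "if-7"]},
           send=lambda g, i: print("forward", g, "on", i))
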
Figure 16 displays the forwarding cache for an example MOSPF router.
The elements in the display include the following items:

    Dest. Group    A known destination group address to which
                   datagrams are currently being forwarded, or to
                   which traffic was sent "recently" (i.e., since
                   the last topology or group membership or other
                   event which (re-)initialized MOSPF's forwarding
                   cache).
Since the publication of the MOSPF RFC, a term has been defined for such
a router: Multicast Border Router.  See section 9 for an overview of the
MBR concepts.  Each inter-AS multicast forwarder is a wildcard multicast
receiver in each of its attached areas.  This guarantees that each
inter-AS multicast forwarder remains on all pruned shortest-path trees
and receives all multicast datagrams.
The details of inter-AS forwarding are very similar to inter-area
forwarding.  On the "inside" of the OSPF domain, the multicast ASBR
must conform to all the requirements of intra-area and inter-area
forwarding.  Within the OSPF domain, group members are reached by the
usual forward path computations, and paths to external sources are
approximated by a reverse-path source-based tree, with the multicast
ASBR standing in for the actual source.  When the source is within the
OSPF AS, and there are external group members, it falls to the inter-
AS multicast forwarders, in their role as wildcard receivers, to make
sure that the data gets out of the OSPF domain and is sent off in the
correct direction.
7.2.5 MOSPF Tree Building and Forwarding Summary
MOSPF builds a separate tree for each (source, group) combination. The
tree is rooted at the source, and includes each of the group members as
leaves. If the (M)OSPF domain is divided into multiple areas, the tree
is built in pieces, one area at a time. These pieces are then glued
together at the area border routers which connect the various areas.
Sometimes group membership of certain areas (or other ASes) is unknown.
MOSPF forces the tree to extend to these areas and/or ASes by adding
their area border routers/AS boundary routers to the tree as "wildcard
multicast receivers".
Construction of the tree within a given area depends on the location of
the source. If the source is within the area, "forward" costs are used,
and the path within the area follows the "forward path", that is, the
same route that unicast packets would take from source to group member.
If the source belongs to a different area, or a different AS, "reverse
costs" are used, resulting in reverse path forwarding through the area.
(Reverse path forwarding is less preferred, but is forced because OSPF
Summary LSAs and AS-External LSAs only advertise costs in the reverse
direction).
MOSPF's tree-building process is data-driven. Despite having Group LSAs
present in each area's link state database, no trees are built unless
multicast data is seen from a source to a group (of course, if the Link
State Database indicates no Group LSAs for this group, then no tree is
built since there must be no group members present in this area). So,
despite MOSPF being a "Dense-mode" routing protocol, it is not based on
broadcast-and-prune, but rather it is a so-called "explicit-join"
protocol.
If a packet arrives for a (source, group) pair which has not been seen
by this router before, the router executes the Dijkstra algorithm over
the relevant links in the Link State Database. The Dijkstra algorithm
outputs the "source-rooted" shortest-path tree for this (source, group),
as described earlier. The router examines its position in the tree,
caching the expected inbound interface for packets from this source, and
listing the outbound interface(s) that lead to active downstream
receivers. Subsequent traffic is examined against this cached data.
The traffic from the source must arrive on the correct interface to be
processed further.
7.3 Protocol-Independent Multicast (PIM)

The Protocol Independent Multicast (PIM) routing protocols have been
developed by the Inter-Domain Multicast Routing (IDMR) working group of
the IETF.  The objective of the IDMR working group is to develop one--or
possibly more than one--standards-track multicast routing protocol(s)
that can provide scalable multicast routing across the Internet.
PIM is actually two protocols: PIM - Dense Mode (PIM-DM) and PIM -
Sparse Mode (PIM-SM).  In the remainder of this introduction, any
references to "PIM" apply equally well to either of the two protocols...
there is no intention to imply that there is only one PIM protocol.
While PIM-DM and PIM-SM share part of their names, and they do have
related control messages, they are really two independent protocols.
PIM receives its name because it is not dependent on the mechanisms
provided by any particular unicast routing protocol.  However, any
implementation supporting PIM requires the presence of a unicast routing
protocol to provide routing table information and to adapt to topology
changes.
PIM makes a clear distinction between a multicast routing protocol that
is designed for dense environments and one that is designed for sparse
environments.  Dense-mode refers to a protocol that is designed to
operate in an environment where group members are relatively densely
packed and bandwidth is plentiful.  Sparse-mode refers to a protocol
that is optimized for environments where group members are distributed
across many regions of the Internet and bandwidth is not necessarily
widely available.  It is important to note that sparse-mode does not
imply that the group has a few members, just that they are widely
dispersed across the Internet.
The designers of PIM-SM argue that DVMRP and MOSPF were developed for
environments where group members are densely distributed, and bandwidth
is relatively plentiful.  They emphasize that when group members and
senders are sparsely distributed across a wide area, DVMRP and MOSPF
exhibit inefficiencies (but both are efficient in delivering packets).
The DVMRP periodically sends multicast packets over many links that do
not lead to group members, while MOSPF propagates group membership
information across links that may not lead to senders or receivers.
7.3.1 PIM - Dense Mode (PIM-DM)
While the PIM architecture was driven by the need to provide scalable
sparse-mode delivery trees, PIM also defines a new dense-mode protocol
instead of relying on existing dense-mode protocols such as DVMRP and
MOSPF.  It is envisioned that PIM-DM would be deployed in resource rich
environments, such as a campus LAN where group membership is relatively
dense and bandwidth is likely to be readily available.  PIM-DM's control
messages are similar to PIM-SM's by design.
PIM - Dense Mode (PIM-DM) is similar to DVMRP in that it employs the
Reverse Path Multicasting (RPM) algorithm.  However, there are several
important differences between PIM-DM and the DVMRP:
    o To find routes back to sources, PIM-DM relies on the presence
      of an existing unicast routing table.  PIM-DM is independent of
      the mechanisms of any specific unicast routing protocol.  In
      contrast, DVMRP contains an integrated routing protocol that
      makes use of its own RIP-like exchanges to build its own unicast
      routing table (so a router may orient itself with respect to
      active source(s)).
    o Unlike the DVMRP, which calculates a set of child interfaces for
      each (source, group) pair, PIM-DM simply forwards multicast
      traffic on all downstream interfaces until explicit prune
      messages are received.  PIM-DM is willing to accept packet
      duplication to eliminate routing protocol dependencies and
      to avoid the overhead inherent in determining the parent/child
      relationships.
For those cases where group members suddenly appear on a pruned branch
of the delivery tree, PIM-DM--like DVMRP--employs graft messages to re-
attach the previously pruned branch to the delivery tree.
7.3.1.1 PIM-DM Tree Building and Forwarding Summary
Dense-mode PIM builds source-based trees by default. It uses the RPM
algorithm, which is what DVMRP uses; thus, PIM-DM is a data-driven
protocol that floods packets to the edges of the PIM-DM domain, and
expects prunes to be returned on inactive branches. A minor difference
from DVMRP is that PIM-DM floods packets for new (source, group) pairs
on all non-incoming interfaces. PIM-DM trades off a bit of extra
flooding traffic for a simpler protocol design.
Pruning in PIM-DM only happens via explicit Prune messages, which are
multicast on broadcast links (if there are other routers present which
hear a prune, and they still wish to receive traffic for this group to
support active receivers that are downstream of them, then these other
routers must multicast PIM-Join packets to ensure they remain attached
to the distribution tree). Finally, PIM-DM uses a reliable graft
mechanism to enable previously-sent prunes to be "erased" when new
downstream group members appear after a prune had been sent.
Since PIM-DM uses RPM, it implements a reverse-path check on all packets
which it receives. Again, this check verifies that received packets
arrive on the interface that the router would use if it needed to
send a packet toward the source's prefix. Since PIM-DM does not have
its own routing protocol (as DVMRP does), it uses the existing unicast
routing protocol to locate itself with respect to the source(s) of
multicast packets it has seen.
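
The forwarding rule summarized above might be sketched as follows.  This
is an illustration only (interface names and helper functions invented),
not an excerpt of any PIM-DM implementation:

   # PIM-DM: RPF check against the existing unicast routing table, then
   # flood on every interface except the incoming one, minus interfaces
   # from which a Prune has been received for this (source, group).
   def pim_dm_forward(source, group, arrival_iface, all_ifaces,
                      unicast_rpf_iface, pruned_ifaces, send):
       if arrival_iface != unicast_rpf_iface(source):
           return                               # fails the reverse-path check
       for oif in all_ifaces:
           if oif == arrival_iface or \
              oif in pruned_ifaces.get((source, group), set()):
               continue
           send(group, oif)                     # flood everywhere else

   pim_dm_forward("10.1.1.1", "224.9.9.9", "eth0",
                  all_ifaces=["eth0", "eth1", "eth2", "eth3"],
                  unicast_rpf_iface=lambda s: "eth0",
                  pruned_ifaces={("10.1.1.1", "224.9.9.9"): {"eth3"}},
                  send=lambda g, i: print("flood", g, "on", i))
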
8. "SPARSE MODE" ROUTING PROTOCOLS 8. "SPARSE MODE" ROUTING PROTOCOLS
The most recent additions to the set of multicast routing protocols are The most recent additions to the set of multicast routing protocols are
called "sparse mode" protocols. They are designed from a different called "sparse mode" protocols. They are designed from a different
perspective than the "dense mode" protocols that we have already perspective than the "dense mode" protocols that we have already
examined. Often, they are not data-driven, in the sense that forwarding examined. Often, they are not data-driven, in the sense that forwarding
state is set up in advance, and they trade off using bandwidth liberally state is set up in advance, and they trade off using bandwidth liberally
(which is a valid thing to do in a campus LAN environment) for other (which is a valid thing to do in a campus LAN environment) for other
techniques that are much more suited to scaling over large WANs, where techniques that are much more suited to scaling over large WANs, where
skipping to change at page 42, line 5 skipping to change at page 44, line 30
o Core-Based Trees (CBT). o Core-Based Trees (CBT).
While these routing protocols are designed to operate efficiently over a
wide area network where bandwidth is scarce and group members may be
quite sparsely distributed, this is not to imply that they are only
suitable for small groups.  Sparse doesn't mean small; rather, it is
meant to convey that the groups are widely dispersed, and thus it is
wasteful to (for instance) flood their data periodically across the
entire internetwork.
CBT and PIM-Sparse Mode (PIM-SM) have been designed to provide highly
efficient communication between members of sparsely distributed groups--
the type of groups that are likely to be prevalent in wide-area
internetworks.  The designers of these sparse-mode protocols have
observed that several hosts participating in a multicast session do not
require periodically broadcasting their traffic across the entire
internetwork.

Noting today's existing MBone scaling problems, and extrapolating to a
future of ubiquitous multicast (overlaid with perhaps thousands of
widely scattered groups), it is not hard to imagine that existing
multicast routing protocols will experience scaling problems.  To
mitigate these potential scaling issues, PIM-SM and CBT are designed
to ensure that multicast traffic only crosses routers that have
explicitly joined a shared tree on behalf of group members.

8.1 Protocol-Independent Multicast - Sparse Mode (PIM-SM)
As described previously, PIM also defines a "dense-mode" or source-based
tree variant. Again, the two protocols are quite unique, and other than
control messages, they have very little in common. Note that although
PIM integrates control message processing and data packet forwarding
among PIM-Sparse and -Dense Modes, PIM-SM and PIM-DM must never run in
the same region at the same time. Essentially, a region is a set of
routers which are executing a common multicast routing protocol.
PIM-SM differs from existing dense-mode protocols in two key ways:

    o Routers with adjacent or downstream members are required to
      explicitly join a sparse mode delivery tree by transmitting
      join messages.  If a router does not join the pre-defined
      delivery tree, it will not receive multicast traffic addressed
      to the group.

      In contrast, dense-mode protocols assume downstream group
      membership and forward multicast traffic on downstream links
      until explicit prune messages are received.  Thus, the default
      forwarding action of dense-mode routing protocols is to forward
      all traffic, while the default action of a sparse-mode protocol
      is to block traffic unless it has been explicitly requested.

    o PIM-SM evolved from the Core-Based Trees (CBT) approach in that
      it employs the concept of a "core" (or rendezvous point (RP) in
      PIM-SM terminology) where receivers "meet" sources.
========================================================================

             S1                               S2
          ___|___                          ___|___
             |                                |
             |                                |
             #                                #
              \                              /
               \                            /
                \_____________RP___________/
                             ./|\.
            ________________// | \\_______________
           /        _______/   |   \______        \
          #        #           #          #        #
       ___|___  ___|___     ___|___    ___|___  ___|___
        |   |    |   |       |   |      |   |    |   |
        R   R    R   R       R   R      R   R    R   R

      LEGEND

         # PIM Router
         R Multicast Receiver

                   Figure 17: Rendezvous Point

========================================================================
When joining a group, each receiver uses IGMP to notify its directly-
attached router, which in turn joins the multicast delivery tree by
sending an explicit PIM-Join message hop-by-hop toward the group's RP.
A source uses the RP to announce its presence, and to act as a conduit
to members that have joined the group.  This model requires sparse-mode
routers to maintain a small amount of state (the RP-set for the sparse-
mode region) prior to the arrival of data.

There is only one RP-set per sparse-mode domain.  By using a hash
function, each PIM-SM router can map a group address uniquely to one of
the members of the RP-set (to determine the group's RP). At any given
time, each group has precisely one RP. In the event of the failure of
an RP, a new RP-set is distributed which does not include the failed RP.
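
The following sketch is illustrative only: the real group-to-RP hash
function is defined in the PIM-SM specification, and an ordinary hash is
used here merely as a stand-in.  The essential point is that every
router in the domain, given the same RP-set and the same group address,
independently computes the same single RP:

   import hashlib

   rp_set = ["192.0.2.1", "192.0.2.2", "192.0.2.3"]   # example RP-set

   def rp_for_group(group, rp_set):
       """Deterministically map a group address onto exactly one RP."""
       return max(rp_set,
                  key=lambda rp: hashlib.md5((group + rp).encode()).hexdigest())

   print(rp_for_group("224.1.2.3", rp_set))   # same answer on every router
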
8.1.1 Directly Attached Host Joins a Group

When there is more than one PIM router connected to a multi-access LAN,
the router with the highest IP address is selected to function as the
Designated Router (DR) for the LAN.  The DR sends Join/Prune messages
toward the RP.
When the DR receives an IGMP Report message for a new group, it performs
a deterministic hash function over the sparse-mode region's RP-set to
uniquely determine the RP for the group.
========================================================================

                               Source (S)
                               _|_____
                                  |
                                  |
                                  #
                                 / \
                                /   \
                               /     \
                              #       #
                                     / \
                                    /   \
        Host |                     /     \
       ------|- DR - - - - - - - -#- - - - - -RPg
   (receiver) |   ----Join-->        ----Join-->
             |

        LEGEND

           #   PIM Router        RPg  Rendezvous Point for group g

                 Figure 18: Host Joins a Multicast Group

========================================================================
After performing the lookup, the DR creates a multicast forwarding entry
for the (*, group) pair and transmits a unicast PIM-Join message toward
the RP for this specific group.  The (*, group) notation indicates an
(any source, group) pair.  The intermediate routers forward the unicast
PIM-Join message, creating a forwarding entry for the (*, group) pair
only if such a forwarding entry does not yet exist.  Intermediate
routers must create a forwarding entry so that they will be able to
forward future traffic downstream toward the DR which originated the
PIM-Join message.
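
As a sketch of the hop-by-hop effect just described (the data structures
and interface names below are invented for illustration), each router on
the path toward the RP adds the interface the Join arrived on to its
(*, group) outgoing-interface list, creating the state only if it does
not already exist:

   class Router:
       def __init__(self, name):
           self.name = name
           self.star_g = {}            # group -> set of downstream interfaces

       def process_join(self, group, arrival_iface):
           existed = group in self.star_g
           self.star_g.setdefault(group, set()).add(arrival_iface)
           return existed              # if it existed, the Join need not go on

   # DR -> intermediate router -> RP, for group 224.1.1.1
   path = [Router("DR"), Router("mid"), Router("RP")]
   ifaces = ["host-lan", "to-dr", "to-mid"]   # interface each hop hears the Join on
   for rtr, iif in zip(path, ifaces):
       if rtr.process_join("224.1.1.1", iif):
           break                       # already on the tree: stop forwarding the Join
   print(path[1].star_g)               # {'224.1.1.1': {'to-dr'}}
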
8.1.2 Directly Attached Source Sends to a Group
When a source first transmits a multicast packet to a group, its DR
forwards the datagram to the RP for subsequent distribution along the
group's delivery tree.  The DR encapsulates the initial multicast
packets in PIM-SM-Register packets and unicasts them toward the group's
RP.  The PIM-SM-Register packets inform the RP of a new source.  The RP
may then elect to transmit PIM-Join messages back toward the source's DR
to join this source's shortest-path tree, which will allow future
unencapsulated packets to flow from this source's DR to the group's RP.
========================================================================

                            Source (S)
                            _|____
                               |
                               |
                               DR  v
                              / ^\  v
                             /  ^ \  v
                            /   ^  \  v
                           #    ^   #  v
                          /     ^    \  v
                         /      ^     \  v
     Host |             /       ^      \  v      | Host
    ------|- # - - - - - -#- - - - - - - - RP - - - # - -|-----
(receiver) | <~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~> | (receiver)

     LEGEND

        # PIM Router                 v v v  PIM-SM-Register
        RP Rendezvous Point          ^ ^ ^  PIM-Join
                                     ~ ~ ~  Resend to group members

            Figure 19: Source sends to a Multicast Group

========================================================================
Unless the RP decides to join the source's SPT, rooted at the source's
DR, the (source, group) state is not created in all the routers between
the source's DR and the RP, and the DR must continue to send the source's
multicast IP packets to the RP as unicast packets encapsulated within
unicast PIM-SM-Register packets. The DR may stop forwarding multicast
packets encapsulated in this manner once it has received a PIM-Register-
Stop message from the group's RP. The RP may send PIM-Register-Stop
messages if there are no downstream receivers for a group, or if the RP
has successfully joined the source's shortest-path tree (SPT).
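
The DR-side behaviour just described can be sketched as follows.  This
is a hedged illustration with invented class and message names, not a
description of any particular implementation:

   class SourceDR:
       def __init__(self, rp_address):
           self.rp = rp_address
           self.register_stopped = False     # set once the RP says to stop

       def on_multicast_from_source(self, packet, unicast_send, multicast_send):
           if not self.register_stopped:
               # Encapsulate and unicast toward the RP (Register path).
               unicast_send(self.rp, ("PIM-SM-Register", packet))
           else:
               # The RP has joined the source's SPT; send natively.
               multicast_send(packet)

       def on_register_stop(self):
           self.register_stopped = True

   dr = SourceDR("192.0.2.1")
   dr.on_multicast_from_source("data1", print, print)   # sent as a Register
   dr.on_register_stop()
   dr.on_multicast_from_source("data2", print, print)   # sent unencapsulated
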
8.1.3 Shared Tree (RP-Tree) or Shortest Path Tree (SPT)?
The RP-tree provides connectivity for group members but does not
optimize the delivery path through the internetwork.  PIM-SM allows
routers to either a) continue to receive multicast traffic over the
shared RP-tree, or b) subsequently create a source-based shortest-path
tree on behalf of their attached receiver(s).  Besides reducing the
delay between this router and the source (beneficial to its attached
receivers), switching to the source-based tree also reduces traffic
concentration effects on the RP-tree.
A PIM-SM router with local receivers has the option of switching to the
source's shortest-path tree (i.e., source-based tree) once it starts
receiving data packets from the source.  The change-over may be
triggered if the data rate from the source exceeds a predefined
threshold.  The local receiver's last-hop router does this by sending a
PIM-Join message toward the active source.  After the source-based SPT
is active, protocol mechanisms allow a Prune message for that source to
be transmitted to the group's RP, thus removing this router from this
source's shared RP-tree.  Alternatively, the DR may be configured to
never switch over to the source-based SPT, or some other metric might
be used to control when to switch to a source-based SPT.
========================================================================

                           Source
                             (S)
                            _|____
                              |
                              |
                            % # `
                           %  / \  `
                          %  /   \   `
                         %  /     \    `
       Designated       %  #       #    `
       Router           % /         \     `
                        %/            \     `
    Host | <-% % % % % %/              \      `
   ------|-#-------------#---------------RP
(receiver) | <-` ` ` ` ` ` ` ` ` ` ` ` ` ` `
          |

     LEGEND

        #   PIM Router
        RP  Rendezvous Point
        ` ` RP-Tree (Shared)
        % % Shortest-Path Tree (Source-based)

     Figure 20: Shared RP-Tree and Shortest Path Tree (SPT)

========================================================================
Besides a last-hop router being able to switch to a source-based tree,
there is also the capability of the RP for a group to transition to a
source's shortest-path tree.  Similar controls (bandwidth threshold,
administrative metrics, etc.) may be used at an RP to influence these
decisions.  The RP only joins the source's DR's SPT if local policy
controls permit this.
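
One plausible (and entirely hypothetical) way a last-hop router or an RP
might apply the "data rate exceeds a predefined threshold" rule is
sketched below; the threshold value and byte accounting are invented for
the example:

   SPT_THRESHOLD_BPS = 100_000        # hypothetical switchover threshold

   class GroupSourceState:
       def __init__(self):
           self.bytes_seen = 0
           self.on_spt = False

       def account(self, packet_len, interval_seconds, send_join, send_prune):
           self.bytes_seen += packet_len
           rate = (self.bytes_seen * 8) / interval_seconds
           if not self.on_spt and rate > SPT_THRESHOLD_BPS:
               send_join("toward source")   # join the source-based SPT...
               send_prune("toward RP")      # ...then prune this source off the RP-tree
               self.on_spt = True

   state = GroupSourceState()
   for _ in range(200):
       state.account(1500, interval_seconds=1,
                     send_join=print, send_prune=print)
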
8.1.4 PIM-SM Tree Building and Forwarding Summary
PIM-SM can use both source-based and shared trees. In fact, a given
group may have some routers on its shared tree, and others on source-
based trees, simultaneously. By default, PIM-SM uses shared trees
rooted at the Rendezvous Point, but regardless of which tree type is in
use, there is no broadcasting of any traffic. Interested receivers use
IGMP to inform their local PIM-SM router(s); the subnetwork's PIM-SM
Designated Router then issues PIM-Join messages (on behalf of the
receivers) toward the group's RP. These join messages establish
forwarding state in the intermediate routers which is used in the future
to make forwarding decisions. If a packet is received which has no
pre-established forwarding state, it is dropped.
As each packet is received, it must match a pre-existing forwarding
cache entry.  If it does, the entry indicates the interface on which to
perform the reverse-path check.  This is consistent with the forwarding
technique used by PIM-DM, and similar to that used by DVMRP. The
unicast routing table provides the necessary information to determine
the best route toward the group's RP; the packet must have arrived on
the interface this router would use to send traffic toward the group's
RP. Note that the forwarding state which is created by PIM-SM is uni-
directional (it allows traffic to flow from the source's DR toward the
RP, not away from it).
8.2 Core Based Trees (CBT)

Core Based Trees is another multicast architecture that is based on a
shared delivery tree.  It is specifically intended to address the
important issue of scalability when supporting multicast applications
across the public Internet, and is also suitable for use within private
intranetworks.
Similar to PIM-SM, CBT is protocol-independent.  CBT employs the
information contained in the unicast routing table to build its shared
delivery tree.  It does not care how the unicast routing table is
derived, only that a unicast routing table is present.  This feature
allows CBT to be deployed without requiring the presence of any specific
unicast routing protocol.
"Protocol independence" doesn't necessarily have to mean that multicast
paths are the same set of routers and links used by unicast routing,
though it is easy to make this assumption (it may indeed hold true in
some cases).  An underlying routing protocol could collect both unicast-
and multicast-related information, so that unicast routes could be
calculated based on the unicast information, and multicast routes based
on the multicast information.  If the path (set of intervening routers
and links) used between any two network nodes is the same for multicast
and for unicast, then it can be said that the unicast and multicast
topologies overlap (for that set of paths).
Where multicast and unicast topologies do happen to overlap, multicast
and unicast paths could be calculated from the one set of information
(i.e., unicast). However, if the set of routers and links between two
network nodes differs for multicast and unicast traffic, then the
unicast and multicast topologies are incongruent for that network path.
It's a matter of policy whether unicast and multicast topologies are
aligned for any set of network links.
The current version of the CBT specification has adopted a similar
"bootstrap" mechanism to that defined in the PIM-SM specification. It
is an implementation choice whether a dynamic or static mechanism is
used for discovering how groups map to cores. This process of
discovering which core serves which group(s) is what is referred to as
bootstrapping.
Each group has only one core, but one core might serve multiple groups.
Use of the dynamic bootstrap mechanism is only applicable within a
multicast region, not between regions. The advantage of the dynamic
approach is that a region's CBT routers need less configuration. The
disadvantage is that core placement could be particularly sub-optimal
for some set of receivers. Manual placement means that each group's
core can be "better" positioned relative to a group's members. CBT's
modular design allows other core discovery mechanisms to be used if such
mechanisms are considered more beneficial to CBT's requirements. For
inter-domain RP/Core discovery, efforts are underway to standardize (or
at least separately specify) a common mechanism, the intent being that
any shared tree protocol could implement this common interdomain
discovery architecture using its own protocol message types.
In a significant departure from PIM-SM, CBT has decided to maintain its
scaling characteristics by not offering the option of optimizing delay
by shifting from a Shared Tree (e.g., PIM-SM's RP-Tree) to a Shortest
Path Tree (SPT). The designers of CBT believe that this is a critical
decision since when multicasting becomes widely deployed, the need for
routers to maintain large amounts of state information will become the
overpowering scaling factor.  Thus CBT's state information (i.e., its
forwarding cache) consists of: (group, incoming interface, {outgoing
interface list}). Forwarding decisions are made by using the group's
destination address as the search key into the forwarding cache.
Finally, unlike PIM-SM's shared tree state, CBT state is bi-directional.
Data may therefore flow in either direction along a branch.  Thus, data
from a source which is directly attached to an existing tree branch need
not be encapsulated.
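
The group-keyed, bi-directional forwarding behaviour described above can
be sketched as follows (interface names and structures are invented for
this illustration; they are not taken from the CBT specification):

   cbt_cache = {
       # group      : set of tree interfaces (parent + children)
       "224.5.6.7": {"up-to-core", "down-1", "down-2"},
   }

   def cbt_forward(group, arrival_iface, send):
       tree_ifaces = cbt_cache.get(group)
       if tree_ifaces is None or arrival_iface not in tree_ifaces:
           return                     # not on this group's tree: discard
       for oif in tree_ifaces - {arrival_iface}:
           send(group, oif)           # flows toward the core and away from it alike

   cbt_forward("224.5.6.7", "down-1",
               send=lambda g, i: print("forward", g, "on", i))
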
8.2.1 Joining a Group's Shared Tree
A host that wants to join a multicast group issues an IGMP host
membership report.  This message informs its local CBT-aware router(s)
that it wishes to receive traffic addressed to the multicast group.  Upon
receipt of an IGMP host membership report for a new group, the local CBT
router issues a JOIN_REQUEST which is processed hop-by-hop, creating
transient join state (incoming interface, outgoing interface) in each
router traversed.
If the JOIN_REQUEST encounters a router that is already on the group's
shared tree before it reaches the core router, then that router issues a
JOIN_ACK hop-by-hop back toward the sending router. If the JOIN_REQUEST
does not encounter an on-tree CBT router along its path towards the
core, then the core router is responsible for responding with a
JOIN_ACK. In either case, each intermediate router that forwards the
JOIN_REQUEST towards the core is required to create a transient "join
state." This transient "join state" includes the multicast group, and
the JOIN_REQUEST's incoming and outgoing interfaces. This information
allows an intermediate router to forward returning JOIN_ACKs along the
exact reverse path to the CBT router which initiated the JOIN_REQUEST.

As the JOIN_ACK travels towards the CBT router that issued the
JOIN_REQUEST, each intermediate router creates new "active state" for
this group. New branches are established by having the intermediate
routers remember which interface is upstream, and which interface(s)
is(are) downstream. Once a new branch is created, each child router
monitors the status of its parent router with a keepalive mechanism,
the CBT "Echo" protocol. A child router periodically unicasts an
ECHO_REQUEST to its parent router, which is then required to respond
with a unicast ECHO_REPLY message.
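The following self-contained Python sketch illustrates, under the
assumptions noted in its comments, how an intermediate router might track
this transient join state and promote it to active state when the
JOIN_ACK returns; the function and variable names are invented for the
example and do not come from the CBT specification.

    # group -> (downstream interface, upstream interface) while a join is pending
    transient_join_state: dict[str, tuple[str, str]] = {}
    # group -> active (on-tree) state once the JOIN_ACK has been processed
    active_state: dict[str, dict] = {}

    def on_join_request(group: str, arrival_iface: str, iface_toward_core: str,
                        already_on_tree: bool) -> str:
        """Return the interface over which this router sends its next message."""
        if already_on_tree:
            # This router (or the core itself) answers immediately with a
            # JOIN_ACK back over the interface the JOIN_REQUEST arrived on.
            return arrival_iface
        # Otherwise remember the reverse path, then forward the JOIN_REQUEST
        # one hop closer to the group's core.
        transient_join_state[group] = (arrival_iface, iface_toward_core)
        return iface_toward_core

    def on_join_ack(group: str, arrival_iface: str) -> str:
        """Promote transient state to active state; return the downstream
        interface over which the JOIN_ACK continues toward the joining router."""
        downstream, _upstream = transient_join_state.pop(group)
        active_state[group] = {"parent": arrival_iface, "children": {downstream}}
        return downstream

    # Example: a router that is not yet on the tree for 224.1.2.3
    print(on_join_request("224.1.2.3", "eth0", "eth1", already_on_tree=False))  # -> eth1
    print(on_join_ack("224.1.2.3", "eth1"))                                     # -> eth0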
If, for any reason, the link between an on-tree router and its parent
should fail, or if the parent router is otherwise unreachable, the
on-tree router transmits a FLUSH_TREE message on its child interface(s)
which begins tearing down all the downstream branches for this group.
Each leaf router is then responsible for re-attaching itself to the
group's core, thus rebuilding the shared delivery tree. Leaf routers
only re-join the shared tree if they have at least one directly-
attached group member.
The Designated Router (DR) for a given broadcast-capable subnetwork is
elected by CBT's "Hello" protocol. It functions as the only upstream
router for all groups using that link. The DR is not necessarily the
best next-hop router to every core for every multicast group. The
implication is that it is possible for the DR to receive a JOIN_REQUEST
over a LAN, but the DR may need to redirect the JOIN_REQUEST back across
the same link to the best next-hop router toward a given group's core.
Data traffic is never duplicated across a link, only JOIN_REQUESTs,
which should be an inconsequential volume of traffic.

========================================================================

             #- - - -#- - - - -#
             |        \
             |         #
             |
             # - - - - #
   member    |
   host -----|
             |   --Join-->      --Join-->       --Join-->
             |- [DR] - - - - [:] - - - - - [:] - - - - - [@]
             |   <--ACK--       <--ACK--        <--ACK--
             |

   LEGEND
             [DR]  Local CBT Designated Router
             [:]   CBT Router
             [@]   Core Router
             #     CBT Router that is already on the shared tree

                  Figure 21: CBT Tree Joining Process
========================================================================
8.2.2 Data Packet Forwarding

When a JOIN_ACK is received by an intermediate router, it either adds
the interface over which the JOIN_ACK was received to an existing
group's forwarding cache entry, or creates a new entry for the multicast
group if one does not already exist. When a CBT router receives a data
packet addressed to any multicast group, it simply forwards the packet
over the outgoing interfaces specified by the group's forwarding cache
entry.
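A minimal sketch of this forwarding rule follows. Because CBT state is
bi-directional, the sketch replicates a packet over every on-tree
interface except the one it arrived on; the cache contents and names are
assumptions for illustration only.

    forwarding_cache = {
        # group address -> set of on-tree interfaces (parent and children
        # alike, since CBT branches are bi-directional)
        "224.1.2.3": {"eth0", "eth1", "eth2"},
    }

    def forward_data_packet(group: str, arrival_iface: str) -> set:
        """Return the interfaces a data packet for 'group' is copied onto."""
        on_tree = forwarding_cache.get(group, set())
        if arrival_iface not in on_tree:
            return set()            # arrived off-tree: nothing to forward here
        return on_tree - {arrival_iface}

    print(forward_data_packet("224.1.2.3", "eth1"))   # e.g. {'eth0', 'eth2'}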
8.2.3 Non-Member Sending

Similar to other multicast routing protocols, CBT does not require that
the source of a multicast packet be a member of the multicast group.
However, for a multicast data packet to reach the active core for the
group, at least one CBT-capable router must be present on the non-member
sender's subnetwork. The local CBT-capable router employs IP-in-IP
encapsulation and unicasts the data packet to the active core for
delivery to the rest of the multicast group. Thus, every CBT-capable
router in each CBT region needs a list of corresponding <core, group>
mappings.
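The sketch below outlines, with invented addresses and an assumed
<core, group> mapping table, what the first-hop CBT router does with a
packet from a non-member sender: look up the group's active core and wrap
the original datagram in a unicast IP-in-IP (IP protocol 4) packet
addressed to that core.

    # group address -> unicast address of the group's active core (assumed values)
    core_for_group = {"224.1.2.3": "192.0.2.1"}

    def encapsulate_for_core(group: str, local_router: str, inner_datagram: bytes) -> dict:
        """Describe the outer unicast packet carrying the original multicast packet."""
        return {
            "outer_src": local_router,             # this CBT router
            "outer_dst": core_for_group[group],    # unicast toward the active core
            "protocol": 4,                         # IPv4-in-IPv4 encapsulation
            "payload": inner_datagram,             # original multicast datagram, untouched
        }

    pkt = encapsulate_for_core("224.1.2.3", "192.0.2.42", b"...original datagram...")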
8.2.4 CBT Tree Building and Forwarding Summary

CBT trees are constructed based on forward paths from the source(s) to
the core. Recall that each group has exactly one active core. Because
CBT is an explicit-join protocol, no router forwards traffic for a group
unless receivers for that group have already joined within the CBT
domain.

Trees are built using each CBT router's unicast routing table, by
forwarding JOIN_REQUESTs toward a group's core. The resulting
non-source-specific state is bi-directional, a feature unique to CBT
trees. If any group member needs to send to the group, no extra
forwarding state needs to be established: every possible receiver is
already a potential sender, with no penalty of extra (source, group)
state in the routers.
8.2.5 CBT Multicast Interoperability
Multicast interoperability is currently being defined. Work is underway
in the IDMR working group to describe the attachment of stub-CBT and
stub-PIM domains to a DVMRP backbone. Future work will focus on
developing methods of connecting non-DVMRP transit domains to a DVMRP
backbone.

CBT interoperability will be achieved through the deployment of domain
border routers (BRs) which enable the forwarding of multicast traffic
between the CBT and DVMRP domains. The BR implements DVMRP and CBT on
different interfaces and is responsible for forwarding data across the
domain boundary.
The BR is also responsible for exporting selected routes out of the CBT
domain into the DVMRP domain. While the CBT stub domain never needs to
import routes, the DVMRP backbone needs to import routes to any sources
of traffic which are inside the CBT domain. The routes must be imported
so that DVMRP can perform its reverse-path check.
========================================================================

     / - - - - - - - \                       /----------------\
     |               |                       |                |
     |    DVMRP      |        +----+         |      CBT       |
     |               |--------| BR |---------|                |
     |   Backbone    |        +----+         |     Domain     |
     |               |                       |                |
     \- - - - - - - -/                       \----------------/

                  Figure 22: Domain Border Router
========================================================================
9. MULTICAST IP ROUTING: RELATED TOPICS

To close this overview of multicast IP routing technology, we first
examine multicast routing protocol interoperability, then turn to
expanding-ring searches, which may not be equally effective depending
on which multicast routing protocol is being used.
9.1 Interoperability Framework for Multicast Border Routers
In late 1996, the IETF IDMR working group began discussing a formal
structure that would describe the way different multicast routing
protocols should interact inside a multicast border router (MBR). The
work-in-progress can be found in the following internet draft: <draft-
thaler-interop-00.ps>, or its successor. The draft covers explicit
rules for the major multicast routing protocols that existed at the end
of 1996: DVMRP, MOSPF, PIM-DM, PIM-SM, and CBT, but applies to any
potential future multicast routing protocol(s) as well.
The IDMR standards work will focus on this generic inter-protocol MBR
scheme, rather than having to write 25 documents, 20 detailing how each
of those 5 protocols must interwork with the 4 others, plus 5 detailing
how two disjoint regions running the same protocol must interwork.
9.1.1 Requirements for Multicast Border Routers
In order to ensure reliable multicast delivery across a network with an
arbitrary mixture of multicast routing protocols, some constraints are
imposed to limit the scope of the problem space.

Each multicast routing domain, or region, may be connected in a "tree
of regions" topology. If more arbitrary inter-regional topologies are
desired, a hierarchical multicast routing protocol (such as H-DVMRP)
must be employed, because it carries topology information about how the
regions are interconnected. Until this information is available, we
[...]
comply with <draft-thaler-interop-00.ps> have other characteristics and
duties, including:
o The MBR consists of at least two active routing components, each
  an instance of some multicast routing protocol. No assumption is
  made about the type of routing protocol (e.g., broadcast-and-prune
  or explicit-join; distance-vector or link-state; etc.) any component
  runs, or the nature of a "component". Multiple components running
  the same protocol are allowed.

o An MBR forwards packets between two or more independent regions, with
  one or more active interfaces per region, but only one component per
  region.

o Each interface for which multicast is enabled is "owned" by exactly
  one of the components at a time.

o All components share a common forwarding cache of (S,G) entries,
  which are created when data packets are received, and can be
  deleted at any time. The component owning an interface is the only
  component that may change forwarding cache entries pertaining to
  that interface. Each forwarding cache entry has a single incoming
  interface (iif) and a list of outgoing interfaces (oiflist), as
  illustrated in the sketch below.
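The sketch below shows one way the shared (S,G) forwarding cache and the
interface-ownership rule could be modelled in code; the class and method
names are illustrative assumptions rather than anything specified in
<draft-thaler-interop-00.ps>.

    from dataclasses import dataclass, field

    @dataclass
    class FwdEntry:
        iif: str                                   # single incoming interface
        oiflist: set = field(default_factory=set)  # outgoing interface list

    class MulticastBorderRouter:
        def __init__(self, iface_owner):
            self.iface_owner = iface_owner   # interface -> owning component
            self.cache = {}                  # (source, group) -> FwdEntry

        def on_data_packet(self, source, group, arrival_iface):
            # Entries are created when data packets are received.
            self.cache.setdefault((source, group), FwdEntry(iif=arrival_iface))

        def add_oif(self, component, source, group, oif):
            # Only the component owning 'oif' may change state for that interface.
            if self.iface_owner.get(oif) != component:
                raise PermissionError(f"{component} does not own {oif}")
            self.cache[(source, group)].oiflist.add(oif)

    mbr = MulticastBorderRouter({"eth0": "DVMRP component", "eth1": "CBT component"})
    mbr.on_data_packet("10.1.1.1", "224.1.2.3", "eth0")
    mbr.add_oif("CBT component", "10.1.1.1", "224.1.2.3", "eth1")   # permitted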
9.2 Issues with Expanding-Ring Searches
Expanding-ring searches may be used when an end-station wishes to find
the closest instance of a certain kind of server. One key assumption is
that each of these servers provides equivalent service: ask any of them
a question pertinent to that service and you should get the same answer.
The DNS is one example of such a service. The searching client sends a
query packet with the IP header's TTL field set to 1, so the packet
reaches only the local subnetwork. If a response is not heard within
some timeout interval, a second query is sent, this time with the TTL
set to 2. Fundamentally, an expanding-ring search is this process of
sending a query, waiting for a response, and retrying with a larger TTL.
Expanding-ring searches can facilitate resource-discovery or
auto-configuration protocols.
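As a concrete illustration, the following sketch implements this loop
with ordinary UDP sockets; the group address, port, query payload, and
timing values are invented for the example, and a real resource-discovery
protocol would define its own.

    import socket

    def expanding_ring_search(group="224.0.1.20", port=5000,
                              query=b"where-are-you?", max_ttl=8, timeout=2.0):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        for ttl in range(1, max_ttl + 1):
            # Each retry widens the "ring" by one hop.
            sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
            sock.sendto(query, (group, port))
            try:
                reply, server = sock.recvfrom(1024)
                return server          # closest responding server found
            except socket.timeout:
                continue               # no answer within this ring; expand it
        return None                    # nothing answered within max_ttl hops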
Another key assumption is that the multicast infrastructure provides the
ability to broadcast from a source equally well in all directions. This
usually implies that, at a minimum, all routers in an internetwork
support a multicast routing protocol.
There are two classes of routing protocols to consider when discussing
expanding-ring searches: DVMRP, MOSPF, and PIM-DM make up one class,
while PIM-SM and CBT comprise the other.
DVMRP supports expanding-ring searches fairly well for a reasonable
number of sources. The downside of using DVMRP is that source-specific
state is kept across the entire DVMRP internetwork. As the number of
sources increases, the amount of state increases linearly. Due to
DVMRP's broadcast-and-prune nature, the tree for each source will
quickly converge to reach all receivers for a given group (limited to
receivers within the packet's TTL). If we assume that all routers in
the internetwork speak DVMRP, these TTL-scoped searches will have the
desired result: as the TTL is incremented, the packets will reach
successively more distant routers, radiating away from the source
subnetwork.
MOSPF supports expanding-ring searches particularly well. It also keeps
source-specific state, so it has the same scaling issue as DVMRP: the
amount of state (forwarding cache) increases linearly with the number of
sources. MOSPF's unique capability is that it knows the topology of
its local area. One by-product of this knowledge is that, for a given
(source, group) pair, each MOSPF router knows the minimum TTL needed to
reach the closest group member. If a packet's TTL is not at least this
large, it need not be forwarded. This conserves bandwidth that would
otherwise be wasted by several iterations of successively larger TTLs.
However, since MOSPF is an explicit-join protocol, any servers wishing
to be found must join the search group in advance; otherwise, MOSPF's
trees will not include those subnetworks.
PIM-DM is very similar to DVMRP in the context of expanding-ring
searches. If we continue to assume that all routers in a given
internetwork support the multicast routing protocol, then RPM (used by
PIM-DM and the DVMRP) will ensure that sources' traffic is broadcast
across the entire internetwork (limited only by the packet's initial
TTL), with no forwarding loops.
Shared-tree protocols do not have properties that necessarily lend
themselves to supporting expanding-ring searches. PIM-SM, by default,
does not build source-based trees. Consider the case of a sender on
a leaf subnet in a PIM-SM domain. Multicast packets sent with TTL=1
will only reach end-stations on the local subnetwork, but TTL=2 packets
will be tunneled inside PIM-SM-Register packets destined for the RP.
Once at the RP, the PIM-SM-Register wrapper is removed, exposing the
multicast packet, which should now have TTL=1 (the DR should have
decremented the TTL before forwarding the packet). The RP can now
forward the original packet over its attached downstream interfaces for
this group. Since the TTL=1, this is as far as they will go. Future
packets, with incrementally-higher TTLs, will radiate outward from the
RP along any downstream branches for this group. Thus, the search will
locate resources that are closest to the RP, not the source (unless the
RP and the source happen to be very close together).
CBT does not really have this problem, at least in the latest version,
since it does not need to encapsulate packets to get them to the group's
core. Also, CBT's state is bi-directional, so any receiver can also be
a sender with no tree-setup penalty. CBT, because it is a shared-tree
protocol, isn't as good as DVMRP, MOSPF, or PIM-DM for expanding-ring
searches, but it is better than PIM-SM: CBT tends to have more
symmetric distances, and "closer" on the core-based tree is more
correlated with "closer" in terms of network hops. However, CBT is not
perfect for these searches. The effectiveness of a CBT search depends
on the density of branch points of the group's distribution tree in the
immediate vicinity of the source. If we assume that all routers are
also CBT routers, then the search can be quite effective.
10. REFERENCES

10.1 Requests for Comments (RFCs)
1075 "Distance Vector Multicast Routing Protocol," D. Waitzman, 1075 "Distance Vector Multicast Routing Protocol," D. Waitzman,
C. Partridge, and S. Deering, November 1988. C. Partridge, and S. Deering, November 1988.
1112 "Host Extensions for IP Multicasting," Steve Deering, 1112 "Host Extensions for IP Multicasting," Steve Deering,
August 1989. August 1989.
[...]
1584 "Multicast Extensions to OSPF," John Moy, March 1994. 1584 "Multicast Extensions to OSPF," John Moy, March 1994.
1585 "MOSPF: Analysis and Experience," John Moy, March 1994. 1585 "MOSPF: Analysis and Experience," John Moy, March 1994.
1700 "Assigned Numbers," J. Reynolds and J. Postel, October 1700 "Assigned Numbers," J. Reynolds and J. Postel, October
1994. (STD 2) 1994. (STD 2)
1812 "Requirements for IP version 4 Routers," Fred Baker, 1812 "Requirements for IP version 4 Routers," Fred Baker,
Editor, June 1995 Editor, June 1995
2000 "Internet Official Protocol Standards," Jon Postel, 2117 "Protocol Independent Multicast-Sparse Mode (PIM-SM):
Editor, February 1997. Protocol Specification," D. Estrin, D. Farinacci, A. Helmy,
D. Thaler; S. Deering, M. Handley, V. Jacobson, C. Liu,
P. Sharma, and L. Wei, July 1997.
10.2 Internet-Drafts 2189 "Core Based Trees (CBT version 2) Multicast Routing,"
A. Ballardie, September 1997.
"Core Based Trees (CBT) Multicast: Architectural Overview," 2200 "Internet Official Protocol Standards," Jon Postel, Editor,
<draft-ietf-idmr-cbt-arch-04.txt>, A. J. Ballardie, March 1997. June 1997. (STD 1)
"Core Based Trees (CBT) Multicast: Protocol Specification," <draft- 2201 "Core Based Trees (CBT) Multicast Routing Architecture,"
ietf-idmr-cbt-spec-07.txt>, A. J. Ballardie, March 1997. A. Ballardie, September 1997.
10.2 Internet-Drafts
"Core Based Tree (CBT) Multicast Border Router Specification for "Core Based Tree (CBT) Multicast Border Router Specification for
Connecting a CBT Stub Region to a DVMRP Backbone," <draft-ietf- Connecting a CBT Stub Region to a DVMRP Backbone," <draft-ietf-
idmr-cbt-dvmrp-00.txt>, A. J. Ballardie, March 1997. idmr-cbt-dvmrp-00.txt>, A. J. Ballardie, March 1997.
"Distance Vector Multicast Routing Protocol," <draft-ietf-idmr- "Distance Vector Multicast Routing Protocol," <draft-ietf-idmr-
dvmrp-v3-04.ps>, T. Pusateri, February 19, 1997. dvmrp-v3-04.ps>, T. Pusateri, February 19, 1997.
"Internet Group Management Protocol, Version 2," <draft-ietf- "Internet Group Management Protocol, Version 2," <draft-ietf-
idmr-igmp-v2-06.txt>, William Fenner, January 22, 1997. idmr-igmp-v2-06.txt>, William Fenner, January 22, 1997.
"Internet Group Management Protocol, Version 3," <draft-cain- "Internet Group Management Protocol, Version 3," <draft-cain-
igmp-00.txt>, Brad Cain, Ajit Thyagarajan, and Steve Deering, igmp-00.txt>, Brad Cain, Ajit Thyagarajan, and Steve Deering,
Expired. Expired.
"Protocol Independent Multicast-Dense Mode (PIM-DM): Protocol "Protocol Independent Multicast Version 2, Dense Mode Specification,"
Specification," <draft-ietf-idmr-pim-dm-spec-04.ps>, D. Estrin, <draft-ietf-idmr-pim-dm-spec-05.ps>, S. Deering, D. Estrin,
D. Farinacci, A. Helmy, V. Jacobson, and L. Wei, September 12, 1996. D. Farinacci, V. Jacobson, A. Helmy, and L. Wei, May 21, 1997.
"Protocol Independent Multicast-Sparse Mode (PIM-SM): Motivation "Protocol Independent Multicast-Sparse Mode (PIM-SM): Motivation
and Architecture," <draft-ietf-idmr-pim-arch-04.ps>, S. Deering, and Architecture," <draft-ietf-idmr-pim-arch-04.ps>, S. Deering,
D. Estrin, D. Farinacci, V. Jacobson, C. Liu, and L. Wei, D. Estrin, D. Farinacci, V. Jacobson, C. Liu, and L. Wei,
November 19, 1996. November 19, 1996.
"Protocol Independent Multicast-Sparse Mode (PIM-SM): Protocol
Specification," <draft-ietf-idmr-pim-sm-spec-09.ps>, D. Estrin,
D. Farinacci, A. Helmy, D. Thaler; S. Deering, M. Handley,
V. Jacobson, C. Liu, P. Sharma, and L. Wei, October 9, 1996.
(Note: Results of IESG review were announced on December 23, 1996:
This internet-draft is to be published as an Experimental RFC.)
"PIM Multicast Border Router (PMBR) specification for connecting "PIM Multicast Border Router (PMBR) specification for connecting
PIM-SM domains to a DVMRP Backbone," <draft-ietf-mboned-pmbr- PIM-SM domains to a DVMRP Backbone," <draft-ietf-mboned-pmbr-
spec-00.txt>, D. Estrin, A. Helmy, D. Thaler, Febraury 3, 1997. spec-00.txt>, D. Estrin, A. Helmy, D. Thaler, February 3, 1997.
"Administratively Scoped IP Multicast," <draft-ietf-mboned-admin-ip- "Administratively Scoped IP Multicast," <draft-ietf-mboned-admin-ip-
space-01.txt>, D. Meyer, December 23, 1996. space-03.txt>, D. Meyer, June 10, 1997.
"Interoperability Rules for Multicast Routing Protocols," <draft- "Interoperability Rules for Multicast Routing Protocols," <draft-
thaler-interop-00.txt>, D. Thaler, November 7, 1996. thaler-interop-00.txt>, D. Thaler, November 7, 1996.
See the IDMR home pages for an archive of specifications: See the IDMR home pages for an archive of specifications:
<URL:http://www.cs.ucl.ac.uk/ietf/public_idmr/> <URL:http://www.cs.ucl.ac.uk/ietf/public_idmr/>
<URL:http://www.ietf.org/html.charters/idmr-charter.html> <URL:http://www.ietf.org/html.charters/idmr-charter.html>
10.3 Textbooks
[...]
Stevens, W. Richard. TCP/IP Illustrated: Volume 1 The Protocols,
Addison Wesley Publishing Company, Reading, MA, 1994.

Wright, Gary and W. Richard Stevens. TCP/IP Illustrated: Volume 2
The Implementation, Addison Wesley Publishing Company, Reading, MA,
1995.
10.4 Other
Dalal, Y. K., and Metcalfe, R. M., "Reverse Path Forwarding of
Broadcast Packets", Communications of the ACM, 21(12):1040-1048,
December 1978.
Deering, Steven E. "Multicast Routing in a Datagram
Internetwork," Ph.D. Thesis, Stanford University, December 1991.

Ballardie, Anthony J. "A New Approach to Multicast Communication
in a Datagram Internetwork," Ph.D. Thesis, University of London,
May 1995.

"Hierarchical Distance Vector Multicast Routing for the MBone,"
Ajit Thyagarajan and Steve Deering, Proceedings of the ACM SIGCOMM,
pages 60-66, October 1995.
11. SECURITY CONSIDERATIONS

As with unicast routing, the integrity and accuracy of the multicast
routing information is important to the correct operation of the
multicast segments of the Internet. Lack of authentication of routing
protocol updates can permit an adversary to inject incorrect routing
data and cause multicast routing to break or flow in unintended
directions. Some existing multicast routing protocols (e.g., MOSPF) do
support cryptographic authentication of their protocol exchanges. More
detailed discussion of multicast routing protocol security is left to
the specifications of those routing protocols.
Lack of authentication of IGMP can permit an adversary to inject false
IGMP messages on a directly attached subnet. Such messages could cause
unnecessary traffic to be transmitted to that subnet (e.g., via a forged
JOIN) or could cause desired traffic to not be transmitted to that
subnet (e.g., via a forged LEAVE). If this is considered to be an
issue, one could use the IP Authentication Header [RFC-1825, RFC-1826]
to provide cryptographic authentication of the IGMP messages. The
reader should consult the IGMPv2 specification for additional
information on this topic.
Security issues in multicast data traffic are beyond the scope of this
document.
12. ACKNOWLEDGEMENTS

This RFC would not have been possible without the encouragement of Mike
O'Dell and the support of David Meyer and Joel Halpern. Also invaluable
were the feedback and comments from the IETF MBoneD and IDMR working
groups. A number of people spent considerable time commenting on and
discussing this paper with the authors, and deserve to be mentioned by
name: Kevin Almeroth, Ran Atkinson, Tony Ballardie, Steve Casner, Jon
Crowcroft, Steve Deering, Bill Fenner, Hugh Holbrook, Cyndi Jung, John
Moy, Shuching Shieh, Dave Thaler, and Nair Venugopal. If we neglected
to mention anyone here, please accept our sincerest apologies.
13. AUTHORS' ADDRESSES

Tom Maufer
3Com Corporation
5400 Bayfront Plaza, Mailstop 5247
P.O. Box 58145
Santa Clara, CA 95052-8145
Phone: +1 408 764-8814
Email: <maufer@3Com.com>

Chuck Semeria
3Com Corporation
5400 Bayfront Plaza, Mailstop 2218
P.O. Box 58145
Santa Clara, CA 95052-8145
Phone: +1 408 764-7201
Email: <semeria@3Com.com>