CCAMP Working Group                                           I. Busi
Internet Draft                                                 Huawei
Intended status: Informational                                D. King
                                                 Lancaster University
                                                             H. Zheng
                                                               Huawei
                                                                Y. Xu
                                                                CAICT
Expires: September 2018                                 March 5, 2018

       Transport Northbound Interface Applicability Statement
          draft-ietf-ccamp-transport-nbi-app-statement-01
Status of this Memo

This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups. Note that
other groups may also distribute working documents as Internet-
Drafts.
Internet-Drafts are draft documents valid for a maximum of six
months and may be updated, replaced, or obsoleted by other documents
at any time. It is inappropriate to use Internet-Drafts as
reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html
This Internet-Draft will expire on September 5, 2018.
Copyright Notice

Copyright (c) 2018 IETF Trust and the persons identified as the
document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with
respect to this document. Code Components extracted from this
document must include Simplified BSD License text as described in
Section 4.e of the Trust Legal Provisions and are provided without
warranty as described in the Simplified BSD License.
Abstract

Transport network domains, including Optical Transport Network (OTN)
and Wavelength Division Multiplexing (WDM) networks, are typically
deployed based on a single vendor or technology platforms. They are
often managed using proprietary interfaces to dedicated Element
Management Systems (EMS), Network Management Systems (NMS) and
increasingly Software Defined Network (SDN) controllers.

A well-defined open interface to each domain management system or
controller is required for network operators to facilitate control
automation and orchestrate end-to-end services across multi-domain
networks. These functions may be enabled using standardized data
models (e.g., YANG) and an appropriate protocol (e.g., RESTCONF).
This document analyses the applicability of the YANG models being
defined by IETF (TEAS and CCAMP WGs in particular) to support OTN
single and multi-domain scenarios.
Table of Contents

1. Introduction
   1.1. Scope of this document
   1.2. Assumptions
2. Terminology
3. Conventions used in this document
   3.1. Topology and traffic flow processing
   3.2. JSON code
4. Scenarios Description
   4.1. Reference Network
      4.1.1. Single-Domain Scenario
      4.1.2. Multi-Domain Scenario
   4.2. Topology Abstractions
   4.3. Service Configuration
      4.3.1. ODU Transit
      4.3.2. EPL over ODU
      4.3.3. Other OTN Client Services
      4.3.4. EVPL over ODU
      4.3.5. EVPLAN and EVPTree Services
      4.3.6. Dynamic Service Configuration
   4.4. Multi-function Access Links
   4.5. Protection and Restoration Configuration
      4.5.1. Linear Protection (end-to-end)
      4.5.2. Segmented Protection
      4.5.3. End-to-End Dynamic Restoration
      4.5.4. Segmented Dynamic Restoration
   4.6. Service Modification and Deletion
   4.7. Notification
   4.8. Path Computation with Constraint
5. YANG Model Analysis
   5.1. YANG Models for Topology Abstraction
      5.1.1. Domain 1 Topology Abstraction
      5.1.2. Domain 2 Grey (Type A) Topology Abstraction
      5.1.3. Domain 3 Grey (Type B) Topology Abstraction
      5.1.4. Multi-domain Topology Stitching
      5.1.5. Access Links
   5.2. YANG Models for Service Configuration
      5.2.1. ODU Transit Service
      5.2.2. EPL over ODU Service
      5.2.3. Other OTN Client Services
      5.2.4. EVPL over ODU Service
   5.3. YANG Models for Protection Configuration
      5.3.1. Linear Protection (end-to-end)
      5.3.2. Segmented Protection
6. Detailed JSON Examples
   6.1. JSON Examples for Topology Abstractions
      6.1.1. Domain 1 White Topology Abstraction
   6.2. JSON Examples for Service Configuration
      6.2.1. ODU Transit Service
   6.3. JSON Example for Protection Configuration
7. Security Considerations
8. IANA Considerations
9. References
   9.1. Normative References
   9.2. Informative References
10. Acknowledgments
Appendix A. Detailed JSON Examples
   A.1. JSON Code: mpi1-otn-topology.json
   A.2. JSON Code: mpi1-odu2-service-config.json
Appendix B. Validating a JSON fragment against a YANG Model
   B.1. DSDL-based approach
   B.2. Why not using a XSD-based approach
1. Introduction
Transport of packet services is critical for a wide range of
applications and services, including: data center and LAN
interconnects, Internet service backhauling, mobile backhaul and
enterprise Carrier Ethernet Services. These services are typically
set up using stovepipe NMS and EMS platforms, often requiring
proprietary management platforms and legacy management interfaces. A
clear goal of operators will be to automate the setup of transport
services across multiple transport technology domains.

A common open interface (API) to each domain controller and/or
management system is a prerequisite for network operators to control
multi-vendor and multi-domain networks and also enable service
provisioning coordination/automation. This can be achieved by using
standardized YANG models together with an appropriate protocol
(e.g., RESTCONF).
This document analyses the applicability of the YANG models being
defined by IETF (TEAS and CCAMP WGs in particular) to support OTN
single and multi-domain scenarios.
1.1. Scope of this document
This document assumes a reference architecture, including
interfaces, based on the Abstraction and Control of Traffic-
Engineered Networks (ACTN), defined in [ACTN-Frame].

The focus of this document is on the MPI (the interface between the
Multi-Domain Service Coordinator (MDSC) and a Provisioning Network
Controller (PNC), controlling a transport network domain).
It is worth noting that the same MPI analyzed in this document could
be used between hierarchical MDSC controllers, as shown in Figure 4
of [ACTN-Frame].

Detailed analysis of the CMI (the interface between the Customer
Network Controller (CNC) and the MDSC), as well as of the interface
between service and network orchestrators, is outside the scope of
this document. However, some considerations and assumptions about
the information exchanged over these interfaces are described when
needed.
The relationship between the current IETF YANG models and the type
of ACTN interfaces can be found in [ACTN-YANG]. This document
therefore considers the TE Topology YANG model defined in [TE-TOPO],
with the OTN Topology augmentation defined in [OTN-TOPO], and the TE
Tunnel YANG model defined in [TE-TUNNEL], with the OTN Tunnel
augmentation defined in [OTN-TUNNEL].

The analysis of how to use the attributes in the I2RS Topology YANG
model, defined in [I2RS-TOPO], is for further study.
The ONF Technical Recommendations for Functional Requirements for
the transport API in [ONF TR-527] and the ONF transport API multi-
domain examples in [ONF GitHub] have been considered as an input for
defining the reference scenarios analyzed in this document.
1.2. Assumptions

This document is making the following assumptions, still to be
validated with TEAS WG:

1. The MDSC can request, at the MPI, a PNC to setup a Transit Tunnel
   Segment using the TE Tunnel YANG model: in this case, since the
   endpoints of the E2E Tunnel are outside the domain controlled by
   that PNC, the MDSC would not specify any source or destination
   TTP (i.e., it would leave the source, destination, src-tp-id and
   dst-tp-id attributes empty) and it would use the explicit-route-
   object list to specify the ingress and egress links of the
   Transit Tunnel Segment.
2. Each PNC provides to the MDSC, at the MPI, the list of available
timeslots on the inter-domain links using the TE Topology YANG
model and OTN Topology augmentation. The TE Topology YANG model
in [TE-TOPO] is being updated to report the label set
information.
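The Transit Tunnel Segment request described in assumption 1 can be
sketched with the fragment below. This is an illustrative sketch
only: the attribute layout, the tunnel name and the link identifiers
are assumptions for the purpose of the example, not the exact
[TE-TUNNEL] tree.

```python
import json

# Sketch of a Transit Tunnel Segment request (assumption 1): the
# source and destination TTPs are left empty because the E2E Tunnel
# endpoints are outside the domain controlled by this PNC, and the
# explicit-route-object list identifies the ingress and egress
# inter-domain links of the segment.
# Attribute names follow the text; the layout and the identifiers
# (tunnel name, link IDs) are hypothetical.
transit_tunnel_segment = {
    "name": "TRANSIT-SEGMENT-EXAMPLE",   # hypothetical name
    "source": None,       # empty: endpoint outside this domain
    "destination": None,  # empty: endpoint outside this domain
    "src-tp-id": None,
    "dst-tp-id": None,
    "explicit-route-object": [
        {"index": 1, "link-id": "INGRESS-LINK-ID"},  # hypothetical
        {"index": 2, "link-id": "EGRESS-LINK-ID"},   # hypothetical
    ],
}

print(json.dumps(transit_tunnel_segment, indent=2))
```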
This document is also making the following assumptions, still to be
validated with CCAMP WG:
2. Terminology
Domain: defined as a collection of network elements within a common
realm of address space or path computation responsibility [RFC5151]
E-LINE: Ethernet Line

EPL: Ethernet Private Line

EVPL: Ethernet Virtual Private Line

OTH: Optical Transport Hierarchy

OTN: Optical Transport Network
Service: A service in the context of this document can be considered
as some form of connectivity between customer sites across the
network operator's network [RFC8309]
Service Model: As described in [RFC8309], it describes a service and
the parameters of the service in a portable way that can be used
uniformly and independently of the equipment and operating
environment.
UNI: User Network Interface
MDSC: Multi-Domain Service Coordinator
CNC: Customer Network Controller
PNC: Provisioning Network Controller
MAC Bridging: Virtual LANs (VLANs) on IEEE 802.3 Ethernet network
3. Conventions used in this document
3.1. Topology and traffic flow processing
The traffic flow between different nodes is specified as an ordered
list of nodes, separated with commas, indicating within the brackets
the processing within each node:
<node> (<processing>){, <node> (<processing>)}
The order represents the order of traffic flow being forwarded
through the network.

The processing can be either an adaptation of a client layer into a
server layer "(client -> server)" or switching at a given layer
"([switching])". Multi-layer switching is indicated by two layer
switching with client/server adaptation: "([client] -> [server])".
For example, the following traffic flow:
C-R1 ([PKT] -> ODU2), S3 ([ODU2]), S5 ([ODU2]), S6 ([ODU2]),
C-R3 (ODU2 -> [PKT])
Node C-R1 is switching at the packet (PKT) layer and mapping packets
into an ODU2 before transmission to node S3. Nodes S3, S5 and S6 are
switching at the ODU2 layer: S3 sends the ODU2 traffic to S5, which
then sends it to S6, which finally sends it to C-R3. Node C-R3
terminates the ODU2 from S6 before switching at the packet (PKT)
layer.
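As an aside, the notation above is regular enough to be machine-
parsed. A minimal sketch (not part of the draft) that splits a
traffic-flow string into (node, processing) pairs:

```python
import re

def parse_flow(flow):
    """Split a traffic-flow string of the form
    '<node> (<processing>){, <node> (<processing>)}'
    into a list of (node, processing) tuples."""
    # Each element is a node name followed by its processing
    # within parentheses.
    return re.findall(r'([A-Za-z0-9-]+)\s*\(([^)]+)\)', flow)

flow = ("C-R1 ([PKT] -> ODU2), S3 ([ODU2]), S5 ([ODU2]), "
        "S6 ([ODU2]), C-R3 (ODU2 -> [PKT])")
for node, processing in parse_flow(flow):
    print(node, "=>", processing)
```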
The paths of working and protection transport entities are specified
as an ordered list of nodes, separated with commas:

<node> {, <node>}
The order represents the order of traffic flow being forwarded
through the network in the forward direction. In case of
bidirectional paths, the forward and backward directions are
selected arbitrarily, but the convention is consistent between
working/protection path pairs as well as across multiple domains.
3.2. JSON code
This document provides some detailed JSON code examples to describe
how the YANG models being developed by IETF (TEAS and CCAMP WGs in
particular) can be used.
The examples are provided using JSON because JSON code is easier for
humans to read and write.
Different objects need to have an identifier. The convention used to
create mnemonic identifiers is to use the object name (e.g., S3 for
node S3), followed by its type (e.g., NODE), separated by a "-",
followed by "-ID". For example, the mnemonic identifier for node S3
would be S3-NODE-ID.
The JSON language does not support the insertion of comments, which
have nevertheless been found useful when writing the examples. This
document inserts comments into the JSON code as JSON name/value
pairs with the JSON name string starting with the "//" characters.
For example, when describing the example of a TE Topology instance
representing the ODU Abstract Topology exposed by the Transport PNC,
the following comment has been added to the JSON code:
"// comment": "ODU Abstract Topology @ MPI",
The JSON code examples provided in this document have been validated
against the YANG models following the validation process described
in Appendix B, which would not consider the comments.
In order to have successful validation of the examples, some
numbering scheme has been defined to assign identifiers to the
different entities which would pass the syntax checks. In this case,
to simplify the reading, another JSON name/value pair, formatted as
a comment and using the mnemonic identifiers, is also provided. For
example, the identifier of node S3 (S3-NODE-ID) has been assumed to
be "10.0.0.3" and would be shown in the JSON code example using the
two JSON name/value pairs:
"// te-node-id": "S3-NODE-ID",
"te-node-id": "10.0.0.3",
The first JSON name/value pair will be automatically removed in the
first step of the validation process, while the second JSON
name/value pair will be validated against the YANG model
definitions.
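That first validation step, removing the "//"-prefixed comment
pairs, can be sketched as follows. This is a hypothetical helper for
illustration, not part of the draft or of any validation tool it
references.

```python
def strip_json_comments(obj):
    """Recursively drop JSON name/value pairs whose name starts with
    '//', leaving only the members to be validated against the YANG
    model definitions."""
    if isinstance(obj, dict):
        return {name: strip_json_comments(value)
                for name, value in obj.items()
                if not name.startswith("//")}
    if isinstance(obj, list):
        return [strip_json_comments(item) for item in obj]
    return obj

node = {
    "// te-node-id": "S3-NODE-ID",  # mnemonic comment, removed
    "te-node-id": "10.0.0.3",       # kept and validated
}
print(strip_json_comments(node))  # {'te-node-id': '10.0.0.3'}
```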
4. Scenarios Description

4.1. Reference Network

The physical topology of the reference network is shown in Figure 1.
It represents an OTN network composed of three transport network
domains providing transport services to an IP customer network
through eight access links:
........................
.......... : :
: : Network domain 1 : .............
Customer: : : : :
domain : : S1 -------+ : : Network :
: : / \ : : domain 3 : ..........
C-R1 ------- S3 ----- S4 \ : : : :
: : \ \ S2 --------+ : :Customer
: : \ \ | : : \ : : domain
: : S5 \ | : : \ : :
C-R2 ------+ / \ \ | : : S31 --------- C-R7
: : \ / \ \ | : : / \ : :
: : S6 ---- S7 ---- S8 ------ S32 S33 ------ C-R8
: : / | | : : / \ / : :.......
C-R3 ------+ | | : :/ S34 : :
: :..........|.......|...: / / : :
........: | | /:.../.......: :
| | / / :
...........|.......|..../..../... :
: | | / / : ..............
: Network | | / / : :
: domain 2 | | / / : :Customer
: S11 ---- S12 / : : domain
: / | \ / : :
: S13 S14 | S15 ------------- C-R4
: | \ / \ | \ : :
: | S16 \ | \ : :
: | / S17 -- S18 --------- C-R5
: | / \ / : :
: S19 ---- S20 ---- S21 ------------ C-R6
: : :
:...............................: :.............
Figure 1 Reference network
The transport domain control architecture, shown in Figure 2,
follows the ACTN architecture and framework document [ACTN-Frame]
and its functional components:
--------------
| |
| CNC |
| |
--------------
|
....................|....................... CMI
|
----------------
| |
| MDSC |
| |
----------------
/ | \
/ | \
............../.....|......\................ MPIs
/ | \
/ ---------- \
/ | PNC2 | \
/ ---------- \
---------- | \
| PNC1 | ----- \
---------- ( ) ----------
| ( ) | PNC3 |
----- ( Network ) ----------
( ) ( Domain 2 ) |
( ) ( ) -----
( Network ) ( ) ( )
( Domain 1 ) ----- ( )
( ) ( Network )
( ) ( Domain 3 )
----- ( )
( )
-----
Figure 2 Controlling Hierarchy
The ACTN framework facilitates the detachment of the network and
service control from the underlying technology and helps the
customer express the network as desired by business needs.
Therefore, care must be taken to keep minimal dependency on the CMI
(or no dependency at all) with respect to the network domain
technologies. The MPI instead requires some specialization according
to the domain technology.
In this document we address the use case where the CNC controls the
customer IP network and requests, at the CMI, transport connectivity
among IP routers to an MDSC, which coordinates, via three MPIs, the
control of a multi-domain transport network through three PNCs.

The interfaces within the scope of this document are the three MPIs,
while the interface between the CNC and the IP routers is out of the
scope of this document. It is also assumed that the CMI allows the
CNC to provide all the information that is required by the MDSC to
properly configure the transport connectivity requested by the
customer.
4.1.1. Single-Domain Scenario

In case the CNC requests transport connectivity between IP routers
attached to the same transport domain (e.g., between C-R1 and C-R3),
the MDSC can pass the service request to the PNC (e.g., PNC1) and
let the PNC take decisions about how to implement the service.
4.1.2. Multi-Domain Scenario

In case the CNC requests transport connectivity between IP routers
attached to different transport domains (e.g., between C-R1 and
C-R5), the MDSC can split the service request into tunnel segment
configurations, pass them to multiple PNCs (PNC1 and PNC2 in this
example) and let each PNC take decisions about how to deploy its
segment of the service.
4.2. Topology Abstractions
Abstraction provides a selective method for representing
connectivity information within a domain. There are multiple methods
to abstract a network topology. This document assumes the
abstraction method defined in [RFC7926]:

"Abstraction is the process of applying policy to the available TE
information within a domain, to produce selective information that
represents the potential ability to connect across the domain.
Thus, abstraction does not necessarily offer all possible
connectivity options, but presents a general view of potential
connectivity according to the policies that determine how the
domain's administrator wants to allow the domain resources to be
used."
[TE-Topo] describes a YANG base model for TE topology without any
technology-specific parameters. Moreover, it defines how to abstract
TE network topologies.
[ACTN-Frame] provides the context of topology abstraction in the [ACTN-Frame] Provides the context of topology abstraction in the
ACTN architecture and discusses a few alternatives for the ACTN architecture and discusses a few alternatives for the
abstraction methods for both packet and optical networks. This is an abstraction methods for both packet and optical networks. This is an
important consideration since the choice of the abstraction method important consideration since the choice of the abstraction method
impacts protocol design and the information it carries. According impacts protocol design and the information it carries. According
to [ACTN-Frame], there are three types of topology: to [ACTN-Frame], there are three types of topology:
o White topology: This is a case where the PNC provides the actual
  network topology to the MDSC without any hiding or filtering. In
  this case, the MDSC has the full knowledge of the underlying
  network topology;

o Black topology: The entire domain network is abstracted as a
  single virtual node with the access/egress links, without
  disclosing any node internal connectivity information;

o Grey topology: This abstraction level is between black topology
  and white topology from a granularity point of view. This is an
  abstraction of TE tunnels for all pairs of border nodes. We may
  further differentiate from the perspective of how to abstract
  internal TE resources between the pairs of border nodes:

  - Grey topology type A: border nodes with TE links between
    them in a full-mesh fashion;

  - Grey topology type B: border nodes with some internal
    abstracted nodes and abstracted links.
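The three topology types above can be illustrated with a small
sketch. This is an illustration only, not defined by this document:
the node names, link representation and the abstract_topology helper
are all hypothetical.

```python
# Sketch of the [ACTN-Frame] abstraction types applied by a PNC to its
# native domain topology before exposing it at the MPI (illustrative).
from itertools import combinations

def abstract_topology(nodes, links, border_nodes, mode):
    """Return the (nodes, links) a PNC would expose for a given mode."""
    if mode == "white":
        # Full topology: no hiding or filtering.
        return nodes, links
    if mode == "black":
        # Whole domain collapsed into one virtual node; only the
        # access/egress links toward other domains remain meaningful.
        return ["AN1"], []
    if mode == "grey-A":
        # Border nodes kept; internal connectivity summarized as a
        # full mesh of abstract TE links between them.
        mesh = [frozenset(p) for p in combinations(sorted(border_nodes), 2)]
        return sorted(border_nodes), mesh
    raise ValueError(mode)

nodes = ["S1", "S2", "S3", "S4"]
links = [frozenset({"S1", "S2"}), frozenset({"S2", "S3"}),
         frozenset({"S3", "S4"})]
white = abstract_topology(nodes, links, {"S1", "S4"}, "white")
black = abstract_topology(nodes, links, {"S1", "S4"}, "black")
grey = abstract_topology(nodes, links, {"S1", "S4"}, "grey-A")
```

Grey topology type B would additionally retain some abstract internal
nodes; it is omitted here for brevity.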
Each PNC should provide the MDSC a topology abstraction of its
domain's network topology.

Each PNC provides the topology abstraction of its own domain
independently from the others; therefore, it is possible that
different PNCs provide different types of topology abstraction.

The MPI operates on the abstract topology regardless of the type of
abstraction provided by each PNC.
To analyze how the MPI operates on abstract topologies independently
of the topology abstraction provided by each PNC (and, therefore,
how different PNCs can provide different topology abstractions), it
is assumed that:
o PNC1 provides a topology abstraction which exposes at the MPI an
abstract node and an abstract link for each physical node and
link within network domain 1
o PNC2 provides a topology abstraction which exposes at the MPI a
single abstract node (representing the whole network domain) with
abstract links representing only the inter-domain physical links
o PNC3 provides a topology abstraction which exposes at the MPI two
abstract nodes (AN31 and AN32). They abstract respectively nodes
S31+S33 and nodes S32+S34. At the MPI, only the abstract nodes
should be reported: the mapping between the abstract nodes (AN31
and AN32) and the physical nodes (S31, S32, S33 and S34) should
be done internally by the PNC.
The MDSC should be capable of stitching together the abstracted
topologies to build its own view of the multi-domain network
topology. The process may require suitable oversight, including
administrative configuration and trust models, but this is out of
scope for this document.
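The stitching step can be sketched as follows. This is a minimal
illustration under assumed inputs: the per-domain abstract
topologies mirror the PNC1/PNC3 assumptions above, while the stitch
function and the inter-domain link set are hypothetical.

```python
# Sketch: an MDSC unions the abstract topologies exposed by each PNC
# at the MPI and joins them with the inter-domain links it knows about.
def stitch(domain_topologies, inter_domain_links):
    """Build the MDSC's multi-domain view from per-PNC abstractions."""
    nodes, links = set(), set()
    for dom_nodes, dom_links in domain_topologies.values():
        nodes |= set(dom_nodes)
        links |= set(dom_links)
    # Inter-domain links connect border/abstract nodes of different
    # domains and are not part of any single PNC's abstraction.
    links |= set(inter_domain_links)
    return nodes, links

# PNC1 exposes one abstract node per physical node; PNC3 exposes two
# abstract nodes (AN31, AN32) hiding S31..S34 behind them.
pnc1 = (["S1", "S2", "S3"],
        [frozenset({"S1", "S2"}), frozenset({"S1", "S3"})])
pnc3 = (["AN31", "AN32"], [frozenset({"AN31", "AN32"})])
view = stitch({"PNC1": pnc1, "PNC3": pnc3},
              [frozenset({"S2", "AN31"})])
```

Note that the stitched view mixes abstraction granularities: physical
nodes from PNC1 coexist with abstract nodes from PNC3, which is
exactly why the MPI must be agnostic to the abstraction type.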
A method and process for topology abstraction for the CMI are
required, and will be discussed in a future revision of this
document.
4.3. Service Configuration

In the following scenarios, it is assumed that the CNC is capable of
requesting service connectivity from the MDSC to support IP router
connectivity.

The type of service could depend on the type of physical links
(e.g., OTN, ETH or SDH links) between the routers and the transport
network.

The control of the different adaptations inside the IP routers, C-Ri
(PKT -> foo) and C-Rj (foo -> PKT), is assumed to be performed by
means that are not under the control of, and not visible to, the
MDSC or the PNCs. Therefore, these mechanisms are outside the scope
of this document.

It is just assumed that the CNC is capable of requesting the proper
configuration of the different adaptation functions inside the
customer's IP routers, by means which are outside the scope of this
document.
4.3.1. ODU Transit

The physical links interconnecting the IP routers and the transport
network can be OTN links. In this case, the physical/optical
interconnections below the ODU layer are supposed to be pre-
configured and not exposed at the MPI to the MDSC.

To setup a 10Gb IP link between C-R1 and C-R5, an ODU2 end-to-end
data plane connection needs to be created between C-R1 and C-R5,
crossing transport nodes S3, S1, S2, S31, S33, S34, S15 and S18,
which belong to different PNC domains.

The traffic flow between C-R1 and C-R5 can be summarized as:
C-R1 ([PKT] -> ODU2), S3 ([ODU2]), S1 ([ODU2]), S2 ([ODU2]),
S31 ([ODU2]), S33 ([ODU2]), S34 ([ODU2]),
S15 ([ODU2]), S18 ([ODU2]), C-R5 (ODU2 -> [PKT])
It is assumed that the CNC requests, via the CMI, the setup of an
ODU2 transit service, providing all the information that the MDSC
needs to understand that it shall setup a multi-domain ODU2 segment
connection between nodes S3 and S18.
In case the CNC needs the setup of a 10Gb IP link between C-R1 and
C-R3 (single-domain service request), the traffic flow between C-R1
and C-R3 can be summarized as:
C-R1 ([PKT] -> ODU2), S3 ([ODU2]), S5 ([ODU2]), S6 ([ODU2]),
C-R3 (ODU2 -> [PKT])

Since the CNC is unaware of the transport network domains, it
requests the setup of an ODU2 transit service in the same way as
before, regardless of the fact that this is a single-domain service.
It is assumed that the information provided at the CMI is sufficient
for the MDSC to understand that this is a single-domain service
request.
The MDSC can then just request PNC1 to setup a single-domain ODU2
data plane segment connection between nodes S3 and S6.
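The decomposition the MDSC performs for the multi-domain case can be
sketched as below. The node-to-domain mapping mirrors the example in
this section; the function name and data layout are illustrative
assumptions, not part of any defined interface.

```python
# Sketch: the MDSC splits the multi-domain ODU2 path into one segment
# per PNC, so that each PNC only receives a request for its own domain.
DOMAIN_OF = {
    "S3": "PNC1", "S1": "PNC1", "S2": "PNC1",
    "S31": "PNC3", "S33": "PNC3", "S34": "PNC3",
    "S15": "PNC2", "S18": "PNC2",
}

def split_into_segments(path):
    """Group a multi-domain node sequence into ordered per-domain
    sub-sequences (one segment connection request per PNC)."""
    segments = []
    for node in path:
        dom = DOMAIN_OF[node]
        if segments and segments[-1][0] == dom:
            segments[-1][1].append(node)
        else:
            segments.append((dom, [node]))
    return segments

path = ["S3", "S1", "S2", "S31", "S33", "S34", "S15", "S18"]
segments = split_into_segments(path)
```

In the single-domain case the same split trivially yields one
segment, which is why the MDSC can simply forward the request to
PNC1.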
4.3.2. EPL over ODU

The physical links interconnecting the IP routers and the transport
network can be Ethernet links.

To setup a 10Gb IP link between C-R1 and C-R5, an EPL service needs
to be created between C-R1 and C-R5, supported by an ODU2 end-to-end
data plane connection between transport nodes S3 and S18, crossing
transport nodes S1, S2, S31, S33, S34 and S15, which belong to
different PNC domains.

The traffic flow between C-R1 and C-R5 can be summarized as:
C-R1 ([PKT] -> ETH), S3 (ETH -> [ODU2]), S1 ([ODU2]),
S2 ([ODU2]), S31 ([ODU2]), S33 ([ODU2]), S34 ([ODU2]),
S15 ([ODU2]), S18 ([ODU2] -> ETH), C-R5 (ETH -> [PKT])
It is assumed that the CNC requests, via the CMI, the setup of an
EPL service, providing all the information that the MDSC needs to
understand that it shall coordinate the three PNCs to setup a multi-
domain ODU2 end-to-end connection between nodes S3 and S18 as well
as the configuration of the adaptation functions inside nodes S3 and
S18: S3 (ETH -> [ODU2]), S18 ([ODU2] -> ETH), S18 (ETH -> [ODU2])
and S3 ([ODU2] -> ETH).
In case the CNC needs the setup of a 10Gb IP link between C-R1 and
C-R3 (single-domain service request), the traffic flow between C-R1
and C-R3 can be summarized as:
C-R1 ([PKT] -> ETH), S3 (ETH -> [ODU2]), S5 ([ODU2]),
S6 ([ODU2] -> ETH), C-R3 (ETH -> [PKT])

As described in section 4.3.1, the CNC requests the setup of an EPL
service in the same way as before, and the information provided at
the CMI is sufficient for the MDSC to understand that this is a
single-domain service request.
The MDSC can then just request PNC1 to setup a single-domain EPL
service between nodes S3 and S6. PNC1 can take care of setting up
the single-domain ODU2 end-to-end connection between nodes S3 and S6
as well as of configuring the adaptation functions on these edge
nodes.

4.3.3. Other OTN Client Services

[ITU-T G.709] defines mappings of different client layers into
ODU. Most of them are used to provide Private Line services over
an OTN transport network supporting a variety of types of physical
access links (e.g., Ethernet, SDH STM-N, Fibre Channel, InfiniBand,
etc.).
The physical links interconnecting the IP routers and the transport
network can be any of these types.
In order to setup a 10Gb IP link between C-R1 and C-R5 using, for
example, SDH physical links between the IP routers and the transport
network, an STM-64 Private Line service needs to be created between
C-R1 and C-R5, supported by an ODU2 end-to-end data plane connection
between transport nodes S3 and S18, crossing transport nodes S1, S2,
S31, S33, S34 and S15, which belong to different PNC domains.
The traffic flow between C-R1 and C-R5 can be summarized as:
C-R1 ([PKT] -> STM-64), S3 (STM-64 -> [ODU2]), S1 ([ODU2]),
S2 ([ODU2]), S31 ([ODU2]), S33 ([ODU2]), S34 ([ODU2]),
S15 ([ODU2]), S18 ([ODU2] -> STM-64), C-R5 (STM-64 -> [PKT])
As described in section 4.3.2, it is assumed that the CNC is
capable, via the CMI, to request the setup of an STM-64 Private Line
service, providing all the information that the MDSC needs to
coordinate the setup of a multi-domain ODU2 connection as well as
the adaptation functions on the edge nodes.
In the single-domain case (10Gb IP link between C-R1 and C-R3), the
traffic flow between C-R1 and C-R3 can be summarized as:
C-R1 ([PKT] -> STM-64), S3 (STM-64 -> [ODU2]), S5 ([ODU2]),
S6 ([ODU2] -> STM-64), C-R3 (STM-64 -> [PKT])

As described in section 4.3.1, the CNC requests the setup of an STM-
64 Private Line service in the same way as before, and the
information provided at the CMI is sufficient for the MDSC to
understand that this is a single-domain service request.

As described in section 4.3.2, the MDSC could just request PNC1 to
setup a single-domain STM-64 Private Line service between nodes S3
and S6.
4.3.4. EVPL over ODU

When the physical links interconnecting the IP routers and the
transport network are Ethernet links, it is also possible that
different Ethernet services (e.g., EVPL) share the same physical
link using different VLANs.

To setup two 1Gb IP links between C-R1 and C-R3 and between C-R1 and
C-R5, two EVPL services need to be created, supported by two ODU0
end-to-end connections, respectively between S3 and S6, crossing
transport node S5, and between S3 and S18, crossing transport nodes
S1, S2, S31, S33, S34 and S15, which belong to different PNC
domains.

Since the two EVPL services are sharing the same Ethernet physical
link between C-R1 and S3, different VLAN IDs are associated with the
different EVPL services: for example, VLAN IDs 10 and 20
respectively.
The traffic flow between C-R1 and C-R5 can be summarized as:
C-R1 ([PKT] -> VLAN), S3 (VLAN -> [ODU0]), S1 ([ODU0]),
S2 ([ODU0]), S31 ([ODU0]), S33 ([ODU0]), S34 ([ODU0]),
S15 ([ODU0]), S18 ([ODU0] -> VLAN), C-R5 (VLAN -> [PKT])
The traffic flow between C-R1 and C-R3 can be summarized as:

C-R1 ([PKT] -> VLAN), S3 (VLAN -> [ODU0]), S5 ([ODU0]),
S6 ([ODU0] -> VLAN), C-R3 (VLAN -> [PKT])

As described in section 4.3.2, it is assumed that the CNC is
capable, via the CMI, to request the setup of these EVPL services,
providing all the information that the MDSC needs to understand that
it needs to request PNC1 to setup an EVPL service between nodes S3
and S6 (single-domain service request) and that it also needs to
coordinate the setup of a multi-domain ODU0 connection between nodes
S3 and S18 as well as the adaptation functions on these edge nodes.
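The VLAN-based service separation at node S3 can be sketched with a
small classification table. This is purely illustrative: the table
layout and the classify_frame helper are assumptions, and the VLAN
IDs are the example values used above.

```python
# Sketch: at node S3 the two EVPL services share the C-R1 access link
# and are distinguished by VLAN ID, each mapped onto its own ODU0.
EVPL_MAP = {
    10: ("ODU0", ("S3", "S6")),   # EVPL toward C-R3, single-domain
    20: ("ODU0", ("S3", "S18")),  # EVPL toward C-R5, multi-domain
}

def classify_frame(vlan_id):
    """Return the ODU connection serving a given VLAN ID, or None if
    no EVPL service is configured for that VLAN."""
    return EVPL_MAP.get(vlan_id)
```

Frames tagged with an unconfigured VLAN would simply not match any
EVPL service on the access link.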
4.3.5. EVPLAN and EVPTree Services

When the physical links interconnecting the IP routers and the
transport network are Ethernet links, multipoint Ethernet services
(e.g., EPLAN and EPTree) can also be supported. It is also possible
that multiple Ethernet services (e.g., EVPL, EVPLAN and EVPTree)
share the same physical link using different VLANs.

Note - it is assumed that EPLAN and EPTree services can be supported
by configuring EVPLAN and EVPTree with port mapping.
Since this EVPLAN/EVPTree service can share the same Ethernet
physical links between IP routers and transport nodes (e.g., with
the EVPL services described in section 4.3.4), a different VLAN ID
(e.g., 30) can be associated with this EVPLAN/EVPTree service.
In order to setup an IP subnet between C-R1, C-R2, C-R3 and C-R5, an
EVPLAN/EVPTree service needs to be created, supported by two ODUflex
end-to-end connections respectively between S3 and S6, crossing
transport node S5, and between S3 and S18, crossing transport nodes
S1, S2, S31, S33, S34 and S15 which belong to different PNC domains.
Some MAC Bridging capabilities are also required on some nodes at
the edge of the transport network: for example Ethernet Bridging
capabilities can be configured in nodes S3 and S6:
o MAC Bridging in node S3 is needed to select, based on the MAC
Destination Address, whether received Ethernet frames should be
forwarded to C-R1 or to the ODUflex terminating on node S6 or to
the other ODUflex terminating on node S18;
o MAC bridging function in node S6 is needed to select, based on
the MAC Destination Address, whether received Ethernet frames
should be sent to C-R2 or to C-R3 or to the ODUflex terminating
on node S3.
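The MAC bridging decision in node S3 described above can be sketched
as a standard filtering-database lookup. The MAC addresses, port
names and the forward helper are hypothetical illustrations; the
flooding behavior for unknown destinations follows ordinary Ethernet
bridging, which this document does not redefine.

```python
# Sketch of the S3 bridging decision: frames are sent toward C-R1,
# toward the ODUflex to S6, or toward the ODUflex to S18, based on
# the destination MAC address (assumed already learned).
S3_FDB = {
    "00:00:00:00:00:01": "to-C-R1",
    "00:00:00:00:00:03": "oduflex-to-S6",   # toward C-R2 / C-R3
    "00:00:00:00:00:05": "oduflex-to-S18",  # toward C-R5
}

def forward(dst_mac, in_port, fdb=S3_FDB):
    """Forward to the learned port; flood to all other ports when the
    destination is unknown; never send back out the ingress port."""
    out = fdb.get(dst_mac)
    if out is None:
        return sorted(p for p in set(fdb.values()) if p != in_port)
    return [out] if out != in_port else []
```

The analogous function in node S6 would select among C-R2, C-R3 and
the ODUflex toward S3.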
In order to support an EVPTree service instead of an EVPLAN,
additional configuration of the Ethernet Bridging capabilities on
the nodes at the edge of the transport network is required.

The traffic flows between C-R1 and C-R3, between C-R3 and C-R5 and
between C-R1 and C-R5 can be summarized as:

C-R1 ([PKT] -> VLAN), S3 (VLAN -> [MAC] -> [ODUflex]),
S5 ([ODUflex]), S6 ([ODUflex] -> [MAC] -> VLAN),
C-R3 (VLAN -> [PKT])
C-R3 ([PKT] -> VLAN), S6 (VLAN -> [MAC] -> [ODUflex]),
S5 ([ODUflex]), S3 ([ODUflex] -> [MAC] -> [ODUflex]),
S1 ([ODUflex]), S2 ([ODUflex]), S31 ([ODUflex]),
S33 ([ODUflex]), S34 ([ODUflex]),
S15 ([ODUflex]), S18 ([ODUflex] -> VLAN), C-R5 (VLAN -> [PKT])

C-R1 ([PKT] -> VLAN), S3 (VLAN -> [MAC] -> [ODUflex]),
S1 ([ODUflex]), S2 ([ODUflex]), S31 ([ODUflex]),
S33 ([ODUflex]), S34 ([ODUflex]),
S15 ([ODUflex]), S18 ([ODUflex] -> VLAN), C-R5 (VLAN -> [PKT])
As described in section 4.3.2, it is assumed that the CNC is
capable, via the CMI, to request the setup of this EVPLAN/EVPTree
service, providing all the information that the MDSC needs to
understand that it needs to request PNC1 to setup an ODUflex
connection between nodes S3 and S6 (single-domain service request)
and that it also needs to coordinate the setup of a multi-domain
ODUflex connection between nodes S3 and S18 as well as the MAC
bridging and the adaptation functions on these edge nodes.

In case the CNC needs the setup of an EVPLAN/EVPTree service only
between C-R1, C-R2 and C-R3 (single-domain service request), it
would request the setup of this service in the same way as before,
and the information provided at the CMI is sufficient for the MDSC
to understand that this is a single-domain service request.

The MDSC can then just request PNC1 to setup a single-domain
EVPLAN/EVPTree service between nodes S3 and S6. PNC1 can take care
of setting up the single-domain ODUflex end-to-end connection
between nodes S3 and S6 as well as of configuring the MAC bridging
and the adaptation functions on these edge nodes.
4.3.6. Dynamic Service Configuration

Given the services established in the previous sections, there may
be a demand to update some service characteristics. A
straightforward approach would be to terminate the current service
and replace it with a new one. A more advanced approach would be
dynamic configuration, in which case there would be no interruption
to the connection.
An example application would be updating the SLA information for a
certain connection. For example, an ODU transit connection is set up
according to section 4.3.1, with the corresponding SLA level of 'no
protection'. After the establishment of this connection, the user
would like to enhance this service by providing restoration after a
potential failure, and a request is generated on the CMI. In this
case, after receiving the request, the MDSC would need to send an
update message to the PNC, changing the SLA parameters in the TE
Tunnel model. The connection characteristics would then be changed
by the PNC, and a notification would be sent to the MDSC for
acknowledgement.
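The update flow above can be sketched as follows. This is a
non-normative illustration: the tunnel name, the 'protection'
attribute and the method names are assumptions, not the actual TE
Tunnel YANG model.

```python
# Sketch: the MDSC patches the SLA of an existing tunnel at the PNC
# instead of tearing the service down and recreating it.
class PNC:
    def __init__(self):
        # One pre-existing ODU transit tunnel with no protection.
        self.tunnels = {"odu2-transit-1": {"protection": "no-protection"}}
        self.notifications = []

    def update_tunnel(self, name, **attrs):
        """Apply the changed attributes in place; the data plane
        connection itself is not interrupted."""
        self.tunnels[name].update(attrs)
        self.notifications.append(("updated", name))

def mdsc_handle_cmi_request(pnc, tunnel, new_sla):
    # The MDSC translates the CNC request into an update toward the
    # PNC and returns the resulting notification as acknowledgement.
    pnc.update_tunnel(tunnel, protection=new_sla)
    return pnc.notifications[-1]

pnc = PNC()
ack = mdsc_handle_cmi_request(pnc, "odu2-transit-1", "restoration")
```

The key point of the sketch is that only attributes change; the
tunnel entry (and hence the connection) survives the update.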
4.4. Multi-function Access Links

Some physical links interconnecting the IP routers and the transport
network can be configured in different modes, e.g., as OTU2 or STM-
64 or 10GE.

This configuration can be done a-priori by means outside the scope
of this document. In this case, these links will appear at the MPI
either as an ODU Link or as an STM-64 Link or as a 10GE Link
(depending on the a-priori configuration) and will be controlled at
the MPI as discussed in section 4.3.
It is also possible not to configure these links a-priori and to
give control to the MPI to decide, based on the service
configuration, how to configure them.
For example, if the physical link between C-R1 and S3 is a multi-
functional access link while the physical links between C-R7 and S31
and between C-R5 and S18 are STM-64 and 10GE physical links
respectively, it is possible to configure either an STM-64 Private
Line service between C-R1 and C-R7 or an EPL service between C-R1
and C-R5.
The traffic flow between C-R1 and C-R7 can be summarized as:

C-R1 ([PKT] -> STM-64), S3 (STM-64 -> [ODU2]), S1 ([ODU2]),
S2 ([ODU2]), S31 ([ODU2] -> STM-64), C-R7 (STM-64 -> [PKT])
The traffic flow between C-R1 and C-R5 can be summarized as:

C-R1 ([PKT] -> ETH), S3 (ETH -> [ODU2]), S1 ([ODU2]),
S2 ([ODU2]), S31 ([ODU2]), S33 ([ODU2]), S34 ([ODU2]),
S15 ([ODU2]), S18 ([ODU2] -> ETH), C-R5 (ETH -> [PKT])
As described in section 4.3.2, it is assumed that the CNC is
capable, via the CMI, to request the setup of either an STM-64
Private Line service between C-R1 and C-R7 or an EPL service between
C-R1 and C-R5, providing all the information that the MDSC needs to
understand that it needs to coordinate the setup of a multi-domain
ODU2 connection, either between nodes S3 and S31 or between nodes S3
and S18, as well as the adaptation functions on these edge nodes,
and in particular whether the multi-function access link between
C-R1 and S3 should operate as an STM-64 or as a 10GE link.
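The link-mode decision can be sketched as a simple mapping from the
requested service type to the operating mode of the C-R1/S3 access
link. The table, the service-type names and the helper function are
hypothetical illustrations of the decision, not defined identifiers.

```python
# Sketch: choosing the operating mode of the multi-function access
# link C-R1/S3 from the requested service type.
SERVICE_TO_LINK_MODE = {
    "STM-64-private-line": "STM-64",
    "EPL": "10GE",
    "ODU2-transit": "OTU2",
}

def configure_access_link(service_type):
    """Return the mode the C-R1/S3 link must operate in for this
    service, or fail for an unsupported service type."""
    try:
        return SERVICE_TO_LINK_MODE[service_type]
    except KeyError:
        raise ValueError(f"unsupported service type: {service_type}")
```

With an a-priori configured link, this decision would instead have
been made outside the MPI and the table would be fixed to one entry.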
4.5. Protection and Restoration Configuration

Protection switching provides a pre-allocated survivability
mechanism, typically provided via linear protection methods, and
would be configured to operate as 1+1 unidirectional (the most
common OTN protection method), 1+1 bidirectional or 1:n
bidirectional. This ensures fast and simple service survivability.

Restoration methods would provide the capability to reroute and
restore connectivity traffic around network faults, without the
network penalty imposed by dedicated 1+1 protection schemes.

This section describes only services which are protected with linear
protection and with dynamic restoration.

The MDSC needs to be capable of coordinating different PNCs to
configure protection switching when requesting the setup of the
protected connectivity services described in section 4.3.

Since in these service examples switching within the transport
network domain is performed only in the OTN ODU layer, protection
switching within the transport network domain can also only be
provided at the OTN ODU layer.

4.5.1. Linear Protection (end-to-end)

In order to protect any service defined in section 4.3 from failures
within the OTN multi-domain transport network, the MDSC should be
capable of coordinating different PNCs to configure and control OTN
linear protection in the data plane between nodes S3 and S18.
It is assumed that the OTN linear protection is configured to with It is assumed that the OTN linear protection is configured to with
1+1 unidirectional protection switching type, as defined in [ITU-T 1+1 unidirectional protection switching type, as defined in [ITU-T
G.808.1-2014] and [ITU-T G.873.1-2014], as well as in [RFC4427]. G.808.1] and [ITU-T G.873.1], as well as in [RFC4427].
In these scenarios, a working transport entity and a protection In these scenarios, a working transport entity and a protection
transport entity, as defined in [ITU-T G.808.1-2014], (or a working transport entity, as defined in [ITU-T G.808.1], (or a working LSP
LSP and a protection LSP, as defined in [RFC4427]) should be and a protection LSP, as defined in [RFC4427]) should be configured
configured in the data plane, for example: in the data plane.
Two cases can be considered:

o In one case, the working and protection transport entities pass
  through the same PNC domains:

     Working transport entity:    S3, S1, S2,
                                  S31, S33, S34,
                                  S15, S18

     Protection transport entity: S3, S4, S8,
                                  S32,
                                  S12, S17, S18

o In another case, the working and protection transport entities
  can pass through different PNC domains:

     Working transport entity:    S3, S5, S7,
                                  S11, S12, S17, S18

     Protection transport entity: S3, S1, S2,
                                  S31, S33, S34,
                                  S15, S18
The PNCs should be capable of reporting to the MDSC which is the
active transport entity, as defined in [ITU-T G.808.1], in the data
plane. Given the fast dynamics of protection switching operations
in the data plane (50ms recovery time), this reporting is not
expected to be in real-time.
It is also worth noting that with unidirectional protection
switching, e.g., 1+1 unidirectional protection switching, the
active transport entity may be different in the two directions.
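The independence of the two directions can be sketched with a small
Python example (illustrative only, not part of this document; the
selector logic is deliberately simplified):

```python
# Illustrative sketch: with 1+1 unidirectional protection, each
# direction's tail-end selector acts only on its own local
# conditions, so the active entity can differ per direction.
# Simplified assumption: the selector switches to protection only
# on a signal fail detected on the working entity.

def active_entity(signal_fail_on_working):
    """Return the entity the tail-end selector listens to."""
    return "protection" if signal_fail_on_working else "working"

# A unidirectional fault affecting only one direction: the two
# tail-end selectors end up listening to different entities.
assert active_entity(signal_fail_on_working=True) == "protection"
assert active_entity(signal_fail_on_working=False) == "working"
```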
4.5.2. Segmented Protection

To protect any service defined in section 4.3 from failures within
the OTN multi-domain transport network, the MDSC should be capable
of requesting each PNC to configure OTN intra-domain protection
when requesting the setup of each ODU2 data plane connection
segment.

If PNC1 provides linear protection, the working and protection
transport entities could be:

   Working transport entity:    S3, S1, S2
   Protection transport entity: S3, S4, S8, S2

If PNC2 provides linear protection, the working and protection
transport entities could be:

   Working transport entity:    S15, S18
   Protection transport entity: S15, S12, S17, S18

If PNC3 provides linear protection, the working and protection
transport entities could be:

   Working transport entity:    S31, S33, S34
   Protection transport entity: S31, S32, S34
4.5.3. End-to-End Dynamic Restoration

To restore any service defined in section 4.3 from failures within
the OTN multi-domain transport network, the MDSC should be capable
of coordinating different PNCs to configure and control OTN
end-to-end dynamic restoration in the data plane between nodes S3
and S18.

For example, the MDSC can request PNC1, PNC2 and PNC3 to create an
unprotected service and then configure the end-to-end service with
dynamic restoration.

   Working transport entity:    S3, S1, S2,
                                S31, S33, S34,
                                S15, S18

When a link failure occurs between S1 and S2 in network domain 1,
PNC1 does not restore the tunnel but sends an alarm notification to
the MDSC, which then performs the end-to-end restoration.

   Restored transport entity:   S3, S4, S8,
                                S12, S15, S18
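The end-to-end restoration step above can be sketched in Python (an
illustrative sketch, not a normative procedure; the link list is a
simplified assumption derived from this document's example
topology):

```python
from collections import deque

# Simplified multi-domain link list (an assumption based on this
# document's example topology, inter-domain links included).
LINKS = [("S3", "S1"), ("S1", "S2"), ("S2", "S31"), ("S31", "S33"),
         ("S33", "S34"), ("S34", "S15"), ("S15", "S18"),
         ("S3", "S4"), ("S4", "S8"), ("S8", "S12"), ("S12", "S15")]

def restore(src, dst, failed_link):
    """Breadth-first search over all links except the failed one."""
    adj = {}
    for a, b in LINKS:
        if {a, b} == set(failed_link):
            continue  # exclude the failed link from path computation
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in adj.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no restoration path available

# Link failure between S1 and S2 in network domain 1: the MDSC
# recomputes a path avoiding the failed link.
restored = restore("S3", "S18", ("S1", "S2"))
assert restored == ["S3", "S4", "S8", "S12", "S15", "S18"]
```

The result matches the restored transport entity listed above.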
4.5.4. Segmented Dynamic Restoration

To restore any service defined in section 4.3 from failures within
the OTN multi-domain transport network, the MDSC should be capable
of coordinating different PNCs to configure and control OTN
segmented dynamic restoration in the data plane between nodes S3
and S18.

   Working transport entity:    S3, S1, S2,
                                S31, S33, S34,
                                S15, S18

When a link failure occurs between S1 and S2 in network domain 1,
PNC1 restores the tunnel and sends an alarm or tunnel update
notification to the MDSC, which then updates the restored tunnel.

   Restored transport entity:   S3, S4, S8, S2,
                                S31, S33, S34,
                                S15, S18
When a link failure occurs between network domain 1 and network
domain 2, PNC1 and PNC2 send alarm notifications to the MDSC, which
then updates the restored tunnel.

   Restored transport entity:   S3, S4, S8,
                                S12, S15, S18
In order to improve the efficiency of recovery, the controller can
establish the recovery paths concurrently. When recovery fails in
one domain or on one network element, a rollback operation should
be supported.

The controller can create the recovery path using the
"make-before-break" method, in order to reduce the impact of the
recovery operation on the services.
4.6. Service Modification and Deletion

To be discussed in future versions of this document.

4.7. Notification

To realize the topology update, service update and restoration
functions, the following notification types should be supported:

1. Object create

2. Object delete

3. Object state change

4. Alarm

Because there are three types of topology abstraction defined in
section 4.2, the notifications should also be abstracted. The PNC
and MDSC should coordinate to determine the notification policy;
for example, when an intra-domain alarm occurs, the PNC may report
a service state change notification to the MDSC instead of the
alarm itself.
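A possible notification abstraction policy can be sketched as
follows (a hypothetical example; the event fields and the policy
itself are assumptions, not defined by this document):

```python
# Hypothetical PNC-side policy sketch: for a grey (abstracted)
# topology, an intra-domain alarm is not exported as-is; the PNC
# reports an object state change for the affected service instead.

def export_notification(event, topology_abstraction):
    if (event["type"] == "alarm" and event["scope"] == "intra-domain"
            and topology_abstraction in ("grey-type-a", "grey-type-b")):
        return {"type": "object-state-change",
                "object": event["affected-service"],
                "state": "degraded"}
    return event  # e.g., white topology: report the alarm unchanged

n = export_notification(
    {"type": "alarm", "scope": "intra-domain",
     "affected-service": "svc-1"}, "grey-type-a")
assert n["type"] == "object-state-change"
```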
4.8. Path Computation with Constraints

It is possible to apply constraints during the path computation
procedure; typical cases include the IRO (Include Route Object) and
the XRO (Exclude Route Object). This information is carried in the
TE Tunnel model and used when a request includes constraints.
Considering the example in section 4.3.1, the request can be a
tunnel from C-R1 to C-R5 with an IRO from S2 to S31; a qualified
result would then be:

C-R1 ([PKT] -> ODU2), S3 ([ODU2]), S1 ([ODU2]), S2 ([ODU2]),
S31 ([ODU2]), S33 ([ODU2]), S34 ([ODU2]),
S15 ([ODU2]), S18 ([ODU2]), C-R5 (ODU2 -> [PKT])

If the request instead carries an IRO from S8 to S12, the above
path would not be qualified, while a possible computation result
may be:

C-R1 ([PKT] -> ODU2), S3 ([ODU2]), S1 ([ODU2]), S2 ([ODU2]),
S8 ([ODU2]), S12 ([ODU2]), S15 ([ODU2]), S18 ([ODU2]),
C-R5 (ODU2 -> [PKT])

Similarly, the XRO can be represented by the TE Tunnel model as
well.

When there is a technology-specific network (e.g., OTN), the
corresponding technology-specific (OTN) model should also be used
to specify the tunnel information at the MPI, with the constraints
included in the TE Tunnel model.
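The IRO/XRO checks above can be sketched in Python (the helper
names are hypothetical, not taken from the TE Tunnel model):

```python
# Sketch: an IRO is an ordered list of hops the path must traverse;
# an XRO is a set of hops the path must avoid.

def satisfies_iro(path, iro):
    it = iter(path)
    return all(hop in it for hop in iro)  # ordered subsequence test

def satisfies_xro(path, xro):
    return not set(path) & set(xro)

path = ["C-R1", "S3", "S1", "S2", "S31", "S33", "S34",
        "S15", "S18", "C-R5"]
assert satisfies_iro(path, ["S2", "S31"])      # qualified
assert not satisfies_iro(path, ["S8", "S12"])  # must be recomputed
assert satisfies_xro(path, ["S8", "S12"])      # avoids S8 and S12
```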
5. YANG Model Analysis

This section provides a high-level overview of how IETF YANG models
can be used at the MPIs, between the MDSC and the PNCs, to support
the scenarios described in section 4.

Section 5.1 describes the different topology abstractions provided
to the MDSC by each PNC via its own MPI.

Section 5.2 describes how the MDSC can coordinate different
requests to different PNCs, via their own MPIs, to setup the
different services defined in section 4.3.

Section 5.3 describes how the protection scenarios can be deployed,
including end-to-end protection and segment protection, for both
the intra-domain and inter-domain scenarios.
5.1. YANG Models for Topology Abstraction

Each PNC reports its respective abstract topology to the MDSC, as
described in section 4.1.2.

5.1.1. Domain 1 Topology Abstraction

PNC1 provides the required topology abstraction to expose at its
MPI towards the MDSC (called "MPI1") one TE Topology instance for
the ODU layer (called "MPI1 ODU Topology"), containing one TE Node
(called "ODU Node") for each physical node, as shown in Figure 3
below.
6.3.4. EVPL over ODU ..................................
: :
: ODU Abstract Topology @ MPI :
: Gotham City Area :
: Metro Transport Network :
: :
: +----+ +----+ :
: | |S1-1 | |S2-1:
: | S1 |--------| S2 |- - - - -(C-R4)
: +----+ S2-2+----+ :
: S1-2/ |S2-3 :
: S3-2/ Robinson Park | :
: +----+ +----+ | :
: | |3 1| | | :
(C-R1)- - - - -| S3 |---| S4 | | :
:S3-1+----+ +----+ | :
: S3-4 \ \S4-2 | :
: \S5-1 \ | :
: +----+ \ | :
: | | \S8-3| :
: | S5 | \ | :
: +----+ Metro \ |S8-2 :
(C-R2)- - - - - 2/ E \3 Main \ | :
:S6-1 \ /3 a E \1 Ring \| :
: +----+s-n+----+ +----+ :
: | |t d| | | |S8-1:
: | S6 |---| S7 |---| S8 |- - - - -(C-R5)
: +----+4 2+----+3 4+----+ :
: / :
(C-R3)- - - - - :
:S6-2 :
:................................:
   Figure 3 Abstract Topology exposed at MPI1 (MPI1 ODU Topology)

The ODU Nodes in Figure 3 use the same names as the physical nodes
to simplify the description of the mapping between the ODU Nodes
exposed by the Transport PNCs at the MPI and the physical nodes in
the data plane. This does not correspond to the actual usage of the
topology model, as described in section 4.3 of [TE-TOPO], in which
renaming by the client is necessary.

As described in section 4.1.2, it is assumed that the physical
links between the physical nodes are pre-configured up to the OTU4
trail using mechanisms which are outside the scope of this
document. PNC1 exports at MPI1 one TE Link (called "ODU Link") for
each of these OTU4 trails.
5.1.2. Domain 2 Grey (Type A) Topology Abstraction

PNC2 provides the required topology abstraction to expose at its
MPI towards the MDSC (called "MPI2") only one abstract node (i.e.,
AN2): only inter-domain and access links are reported at MPI2.

5.1.3. Domain 3 Grey (Type B) Topology Abstraction

PNC3 provides the required topology abstraction to expose at its
MPI towards the MDSC (called "MPI3") only two abstract nodes (i.e.,
AN31 and AN32), with internal links, inter-domain links and access
links.
5.1.4. Multi-domain Topology Stitching

As assumed at the beginning of this section, the MDSC does not have
any knowledge of the topology of each domain until each PNC reports
its own abstract topology, so the MDSC needs to merge the abstract
topologies provided by the different PNCs, at the MPIs, to build
its own topology view, as described in section 4.3 of [TE-TOPO].

Given the topologies reported by multiple PNCs, the MDSC needs to
stitch them together to obtain the full multi-domain topology map.
The topology of each domain may be abstracted (refer to section 5.2
of [ACTN-Frame] for the different levels of abstraction), while the
inter-domain link information must be complete and fully configured
by the MDSC.

The inter-domain link information is reported to the MDSC by the
two PNCs controlling the two ends of the inter-domain link. The
MDSC needs to understand how to "stitch" together these
inter-domain links.
One possibility is to use the plug-id information defined in
[TE-TOPO]: two inter-domain links reporting the same plug-id value
can be merged as a single intra-domain link within any MDSC native
topology. The value of the reported plug-id information can either
be assigned by a central network authority, and configured within
the two PNC domains, or be discovered using automatic discovery
mechanisms (e.g., LMP-based, as defined in [RFC6898]).

In case the plug-id values are assigned by a central authority, it
is the central authority's responsibility to assign unique values.

In case the plug-id values are automatically discovered, the
information discovered by the automatic discovery mechanisms needs
to be encoded as a bit string within the plug-id value. This
encoding is implementation specific, but the encoding rules need to
be consistent across all the PNCs.

In case of co-existence within the same network of multiple sources
for the plug-id (e.g., a central authority and automatic discovery,
or even different automatic discovery mechanisms), it is
recommended that the plug-id namespace be partitioned to avoid
different sources assigning the same plug-id value to different
inter-domain links. The encoding of the plug-id namespace within
the plug-id value is implementation specific but needs to be
consistent across all the PNCs.
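Plug-id based stitching can be sketched as follows (the data
structures and plug-id values are hypothetical; [TE-TOPO] defines
the plug-id concept, not this representation):

```python
# Sketch: inter-domain link endpoints reporting the same plug-id
# value are merged into a single link in the MDSC native topology.

def stitch_by_plug_id(endpoints):
    """endpoints: list of (pnc, node-id, tp-id, plug-id) tuples."""
    by_plug = {}
    for ep in endpoints:
        by_plug.setdefault(ep[3], []).append(ep)
    merged = []
    for eps in by_plug.values():
        # A link is stitched only when exactly two endpoints, each
        # reported by a different PNC, share the same plug-id value.
        if len(eps) == 2 and eps[0][0] != eps[1][0]:
            merged.append((eps[0][:3], eps[1][:3]))
    return merged

eps = [("PNC1", "S2", "S2-1", 0xA1), ("PNC3", "S31", "S31-1", 0xA1),
       ("PNC1", "S8", "S8-2", 0xB2), ("PNC2", "S12", "S12-1", 0xB2)]
links = stitch_by_plug_id(eps)
assert len(links) == 2  # two stitched inter-domain links
```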
Another possibility is to pre-configure, either in the adjacent
PNCs or in the MDSC, the association between the inter-domain link
identifiers (topology-id, node-id and tp-id) assigned by the two
adjacent PNCs to the same inter-domain link.

This last scenario requires further investigation and will be
discussed in a future version of this document.
5.1.5. Access Links

Access links in Figure 3 are shown as ODU Links: the modeling of
the access links for other access technologies is currently an open
issue.

The modeling of the access link in the case of a non-ODU access
technology also has an impact on the need to model ODU TTPs and
layer transition capabilities on the edge nodes (e.g., nodes S2,
S3, S6 and S8 in Figure 3).

If, for example, the physical NE S6 is implemented as a "pizza
box", the data plane would have only one set of ODU termination
resources (where up to 2xODU4, 4xODU3, 20xODU2, 80xODU1, 160xODU0
and 160xODUflex can be terminated). The traffic coming from each of
the 10GE access links can be mapped into any of these ODU
terminations.

If instead, for example, the physical NE S6 is implemented as a
multi-board system where access links reside on different/dedicated
access cards, each with a separate set of ODU termination resources
(where up to 1xODU4, 2xODU3, 10xODU2, 40xODU1, 80xODU0 and
80xODUflex can be terminated per set), the traffic coming from one
10GE access link can be mapped only into the ODU terminations which
reside on the same access card.

The most generic implementation option for a physical NE (e.g., S6)
is that of a multi-board system with multiple access cards, each
with a separate set of access links and ODU termination resources
(where up to 1xODU4, 2xODU3, 10xODU2, 40xODU1, 80xODU0 and
80xODUflex can be terminated per set). The traffic coming from each
of the 10GE access links on one access card can be mapped only into
the ODU terminations which reside on the same access card.

In the last two cases, only the ODUs terminated on the same access
card where the access link resides can carry the traffic coming
from that 10GE access link. In all these cases, terminated ODUs can
be sent to any of the OTU4 interfaces, assuming the implementation
is based on a non-blocking ODU cross-connect.
If the access links are reported via the MPI in some, still to be
defined, client topology, it is possible to report each set of ODU
termination resources as an ODU TTP within the ODU Topology of
Figure 3 and to use either the inter-layer lock-id or the
transitional link, as described in sections 3.4 and 3.10 of
[TE-TOPO], to correlate the access links, in the client topology,
with the ODU TTPs, in the ODU topology, to which the access links
are connected.
5.2. YANG Models for Service Configuration

The service configuration procedure is assumed to be initiated
(step 1 in Figure 4) at the CMI from the CNC to the MDSC. Analysis
of the CMI models (e.g., L1SM, L2SM, Transport-Service, VN, et al.)
is outside the scope of this document.

As described in section 4.3, it is assumed that the CMI YANG models
provide all the information that allows the MDSC to understand that
it needs to coordinate the setup of a multi-domain ODU connection
(or connection segment) and, when needed, also the configuration of
the adaptation functions in the edge nodes belonging to different
domains.
              |
              | {1}
V
----------------
| {2} |
| {3} MDSC |
| |
----------------
^ ^ ^
{3.1} | | |
+---------+ |{3.2} |
| | +----------+
| V |
| ---------- |{3.3}
| | PNC2 | |
| ---------- |
| ^ |
V | {4.2} |
---------- V |
| PNC1 | ----- V
---------- (Network) ----------
^ ( Domain 2) | PNC3 |
| {4.1} ( _) ----------
V ( ) ^
----- C==========D | {4.3}
(Network) / ( ) \ V
( Domain 1) / ----- \ -----
( )/ \ (Network)
A===========B \ ( Domain 3)
/ ( ) \( )
AP-1 ( ) X===========Z
----- ( ) \
( ) AP-2
-----
   Figure 4 Multi-domain Service Setup
As an example, the objective in this section is to configure a
transport service between C-R1 and C-R5. The cross-domain routing
is assumed to be C-R1 <-> S3 <-> S2 <-> S31 <-> S33 <-> S34 <->
S15 <-> S18 <-> C-R5.

Depending on the client signal type, different adaptations are
required.

After receiving such a request, the MDSC determines the domain
sequence, i.e., domain 1 <-> domain 2 <-> domain 3, with the
corresponding PNCs and inter-domain links (step 2 in Figure 4).

As described in [PATH-COMPUTE], the domain sequence can be
determined by running the MDSC's own path computation on the MDSC
internal topology, defined in section 5.1.4, if and only if the
MDSC has enough topology information. Otherwise, the MDSC can send
path computation requests to the different PNCs (steps 2.1, 2.2 and
2.3 in Figure 4) and use this information to determine the optimal
path on its internal topology, and therefore the domain sequence.
The MDSC will then decompose the tunnel request into several tunnel
segments via the tunnel models (including both the TE Tunnel model
and the OTN Tunnel model), and request the different PNCs to setup
each intra-domain tunnel segment (steps 3, 3.1, 3.2 and 3.3 in
Figure 4).

Assuming that each intra-domain tunnel segment can be set up
successfully, each PNC responds to the MDSC respectively. Based on
each segment, the MDSC takes care of the configuration of both the
intra-domain tunnel segments and the inter-domain tunnel via the
corresponding MPIs (using the TE Tunnel model and the OTN Tunnel
model). More specifically, for the inter-domain configuration, the
ts-bitmap and tpn attributes need to be configured using the OTN
Tunnel model [xxx]. Then the end-to-end OTN tunnel will be ready.
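The decomposition of the end-to-end tunnel into per-domain segments
can be sketched as follows (the node-to-PNC mapping is an
assumption based on this document's example topology):

```python
# Sketch: group consecutive hops of the end-to-end path by the PNC
# controlling them, yielding one tunnel segment request per PNC.
from itertools import groupby

DOMAIN = {"S3": "PNC1", "S2": "PNC1", "S31": "PNC3", "S33": "PNC3",
          "S34": "PNC3", "S15": "PNC2", "S18": "PNC2"}

def decompose(path):
    return [(pnc, list(hops))
            for pnc, hops in groupby(path, key=DOMAIN.get)]

segments = decompose(["S3", "S2", "S31", "S33", "S34", "S15", "S18"])
assert segments == [("PNC1", ["S3", "S2"]),
                    ("PNC3", ["S31", "S33", "S34"]),
                    ("PNC2", ["S15", "S18"])]
```

Each resulting segment would then be requested from its controlling
PNC via the TE and OTN Tunnel models at the corresponding MPI.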
In any case, the access link configuration is done only on the PNCs
that control the access links (e.g., PNC-1 and PNC-3 in our example)
and not on the PNCs of transit domains (e.g., PNC-2 in our example).
The access links will be configured by the MDSC after the OTN tunnel
is set up. The access configuration is different for, and dependent
on, each type of service. More details can be found in the following
sections.

5.2.1. ODU Transit Service

In this scenario, the access links are configured as ODU Links.

As described in section 4.3.1, the CNC needs to setup an ODU2 end-
to-end connection, supporting an IP link, between C-R1 and C-R5, and
requests the setup of an ODU transit service via the CMI to the
MDSC.

From the topology information described in section 5.1 above, the
MDSC understands that C-R1 is attached to the access link
terminating on the S3-1 LTP in the ODU Topology exposed by PNC1 and
that C-R5 is attached to the access link terminating on the AN2-1
LTP in the ODU Topology exposed by PNC2.
Based on the assumption 0) in section 1.2, the MDSC would then
request PNC1 to setup an ODU2 (Transit Segment) Tunnel between the
S3-1 and S6-2 LTPs:

o Source and Destination TTPs are not specified (since it is a
  Transit Tunnel)

o Ingress and egress points are indicated in the explicit-route-
  objects of the primary path:

  o The first element of the explicit-route-objects references the
    access link terminating on the S3-1 LTP

  o The last element of the explicit-route-objects references the
    access link terminating on the S6-2 LTP
The configuration of the timeslots used by the ODU2 connection
within the transport network domain (i.e., on the internal links) is
a matter of the Transport PNC and its interactions with the physical
network elements and therefore is outside the scope of this
document.

However, the configuration of the timeslots used by the ODU2
connection at the edge of the transport network domain (i.e., on the
access links) needs to take into account not only the timeslots
available on the physical nodes at the edge of the transport network
domain (e.g., S3 and S6) but also those available on the devices,
outside of the transport network domain, connected through these
access links (e.g., C-R1 and C-R3).
Based on the assumption 2) in section 1.2, the MDSC, when requesting
the Transport PNC to setup the (Transit Segment) ODU2 Tunnel, would
also configure the timeslots to be used on the access links. The
MDSC can learn the timeslots which are available on the edge OTN
nodes (e.g., S3 and S6) from the OTN Topology information exposed by
the Transport PNC at the MPI, as well as the timeslots which are
available on the devices outside of the transport network domain
(e.g., C-R1 and C-R3), by means which are outside the scope of this
document.
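Selecting timeslots that work on both ends of an access link amounts to intersecting the two availability sets. The sketch below is illustrative only; representing each end's free tributary slots as a set of slot numbers is an assumption made for clarity, not an encoding defined by the OTN models.

```python
def usable_timeslots(edge_node_free, client_free, needed):
    """Return `needed` timeslots free on BOTH ends of an access link,
    or None if the access link cannot carry the connection.

    edge_node_free -- slots free on the edge OTN node (e.g. S3),
                      as learnt from the OTN Topology at the MPI
    client_free    -- slots free on the attached device (e.g. C-R1),
                      learnt by means outside the scope of the draft
    """
    common = sorted(set(edge_node_free) & set(client_free))
    return common[:needed] if len(common) >= needed else None

# An ODU2 needs 8 x 1.25G tributary slots on the access link
print(usable_timeslots(range(1, 41), [2, 3, 5, 8, 13, 21, 34, 40, 55], 8))
# -> [2, 3, 5, 8, 13, 21, 34, 40]
```

The chosen slots would then be written into the access-link configuration together with the Tunnel setup request, as described above.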
The Transport PNC performs path computation and sets up the ODU2
cross-connections within the physical nodes S3, S5 and S6, as shown
in section 4.3.1.

The Transport PNC reports the status of the created ODU2 (Transit
Segment) Tunnel and its path within the ODU Topology as shown in
Figure 5 below:

..................................
: :
: ODU Abstract Topology @ MPI :
: :
: +----+ +----+ :
: | | | | :
: | S1 |--------| S2 |- - - - -(C-R4)
: +----+ +----+ :
: / | :
: / | :
: +----+ +----+ | :
: | | | | | :
(C-R1)- - - - - S3 |---| S4 | | :
:S3-1 <<== + +----+ | :
: = \ | :
: = \ \ | :
: == ---+ \ | :
: = | \ | :
: = S5 | \ | :
: == --+ \ | :
(C-R2)- - - - - = \ \ | :
:S6-1 \ / = \ \ | :
: +--- = +----+ +----+ :
: | = | | | | :
: | S6 = --| S7 |---| S8 |- - - - -(C-R5)
: +--- = +----+ +----+ :
: / = :
(C-R3)- - - - - <<== :
:S6-2 :
:................................:
Figure 5 - ODU2 Transit Tunnel

5.2.2. EPL over ODU Service
In this scenario, the access links are configured as Ethernet Links.

As described in section 4.3.2, the CNC needs to setup an EPL
service, supporting an IP link, between C-R1 and C-R3 and requests
this service at the CMI to the MDSC.
The MDSC needs to setup an EPL service between C-R1 and C-R3
supported by an ODU2 end-to-end connection between S3 and S6.
As described in section 5.1.5 above, it is not clear in this case
how the Ethernet access links, between the transport network and the
IP routers, are reported by the PNC to the MDSC.

If the 10GE physical links are not reported as ODU links within the
ODU topology information, described in section 5.1.1 above, then the
MDSC will not have sufficient information to know that C-R1 and C-R3
are attached to nodes S3 and S6.
Assuming that the MDSC knows how C-R1 and C-R3 are attached to the
transport network, the MDSC would request the Transport PNC to setup
an ODU2 end-to-end Tunnel between S3 and S6.

This ODU Tunnel is setup between two TTPs of nodes S3 and S6. In
case nodes S3 and S6 support more than one TTP, the MDSC should
decide which TTP to use.

As discussed in section 5.1.5, depending on the different hardware
implementations of the physical nodes S3 and S6, not all the access
links can be connected to all the TTPs. The MDSC should therefore
select not only the optimal TTP but also a TTP that would allow the
Tunnel to be used by the service.

It is assumed that, in case node S3 or node S6 supports only one
TTP, this TTP can be accessed by all the access links.
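The TTP choice described above can be sketched as a filter-then-optimize step. This is illustrative only: the reachability map between access links and TTPs is an assumption made for the example, since how that constraint is learnt by the MDSC is exactly the open issue discussed in section 5.1.5.

```python
def select_ttp(ttps, reachable_from, access_link, cost):
    """Pick the cheapest TTP that the given access link can reach.

    ttps           -- candidate TTP ids on the edge node (e.g. S3)
    reachable_from -- access-link id -> set of reachable TTP ids
    cost           -- TTP id -> path cost of a tunnel from that TTP
    """
    # First filter: keep only TTPs usable by the service's access link
    usable = [t for t in ttps if t in reachable_from[access_link]]
    # Then optimize: cheapest tunnel among the usable TTPs
    return min(usable, key=lambda t: cost[t]) if usable else None

ttps = ["TTP-1", "TTP-2", "TTP-3"]
reach = {"S3-1": {"TTP-2", "TTP-3"}}           # TTP-1 sits on another card
cost = {"TTP-1": 5, "TTP-2": 20, "TTP-3": 10}
print(select_ttp(ttps, reach, "S3-1", cost))   # -> TTP-3
```

Note that the globally optimal TTP-1 is rejected because the access link toward C-R1 cannot reach it, matching the constraint stated above.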
Once the ODU2 Tunnel setup has been requested, the MDSC also needs
to request the setup of an EPL service between this ODU2 Tunnel and
the access links on S3 and S6 attached to C-R1 and C-R3, unless
there is a one-to-one relationship between the S3 and S6 TTPs and
these Ethernet access links (as in the case, described in section
5.1.5, where the Ethernet access links reside on dedicated access
cards, such that the ODU2 tunnel can only carry the Ethernet traffic
from the single Ethernet access link on the access card where the
ODU2 tunnel is terminated).
5.2.3. Other OTN Client Services

In this scenario, the access links are configured as OTN client
(e.g., STM-64) links.
As described in section 4.3.3, the CNC needs to setup an STM-64
Private Link service, supporting an IP link, between C-R1 and C-R3
and requests this service at the CMI to the MDSC.
MDSC needs to setup an STM-64 Private Link service between C-R1 and
C-R3 supported by an ODU2 end-to-end connection between S3 and S6.
As described in section 5.1.5 above, it is not clear in this case
how the access links (e.g., the STM-N access links), between the
transport network and the IP routers, are reported by the PNC to the
MDSC.
The same issues, as described in section 5.2.2, apply here:

o the MDSC needs to understand that C-R1 and C-R3 are connected,
  through STM-64 access links, with S3 and S6

o the MDSC needs to understand which TTPs in S3 and S6 can be
  accessed by these access links

o the MDSC needs to configure the private line service from these
  access links through the ODU2 tunnel
5.2.4. EVPL over ODU Service
In this scenario, the access links are configured as Ethernet links,
as described in section 5.2.2 above.
As described in section 4.3.4, the CNC needs to setup EVPL services,
supporting IP links, between C-R1 and C-R3, as well as between C-R1
and C-R4 and requests these services at the CMI to the MDSC.
The MDSC needs to setup two EVPL services, between C-R1 and C-R3,
as well as between C-R1 and C-R4, supported by ODU0 end-to-end
connections between S3 and S6 and between S3 and S2 respectively.
As described in section 5.1.5 above, it is not clear in this case
how the Ethernet access links, between the transport network and the
IP routers, are reported by the PNC to the MDSC.
The same issues, as described in section 5.1.5 above, apply here:

o the MDSC needs to understand that C-R1, C-R3 and C-R4 are
  connected, through the Ethernet access links, with S3, S6 and S2

o the MDSC needs to understand which TTPs in S3, S6 and S2 can be
  accessed by these access links

o the MDSC needs to configure the EVPL services from these access
  links through the ODU0 tunnels
In addition, the MDSC needs to get the information that the access
links on S3, S6 and S2 are capable of supporting EVPL (rather than
just EPL), as well as to coordinate the VLAN configuration, for each
EVPL service, on these access links (this is a similar issue to the
timeslot configuration on access links discussed in section 4.3.1
above).
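The per-service VLAN coordination could, for example, be conveyed in a fragment along these lines. This is a sketch only: the leaf names are loosely inspired by [CLIENT-SVC] and are assumptions, not normative model output; the node, LTP and VLAN values are invented for illustration.

```json
{
  "access-ports": [
    { "node-id": "S3", "ltp-id": "S3-2", "service-vlan-id": 100 },
    { "node-id": "S6", "ltp-id": "S6-3", "service-vlan-id": 100 }
  ]
}
```

The point of the fragment is that the same service VLAN id must be configured consistently on both access links of a given EVPL service, just as the same timeslots must be agreed on both ends of an ODU access link in section 4.3.1.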
5.3. YANG Models for Protection Configuration
5.3.1. Linear Protection (end-to-end)
To be discussed in future versions of this document.
5.3.2. Segmented Protection
To be discussed in future versions of this document.
6. Detailed JSON Examples
6.1. JSON Examples for Topology Abstractions
6.1.1. Domain 1 White Topology Abstraction
Section 5.1.1 describes how PNC1 can provide a white topology
abstraction to the MDSC via the MPI. Figure 3 is an example of such
an ODU Topology.
This section provides the detailed JSON code describing how this ODU
Topology is reported by the PNC, using the [TE-TOPO] and [OTN-TOPO]
YANG models at the MPI.
JSON code "mpi1-otn-topology.json" has been provided in the appendix
of this document.
6.2. JSON Examples for Service Configuration
6.2.1. ODU Transit Service
Section 5.2.1 describes how the MDSC can request PNC1, via the MPI,
to setup an ODU2 transit service over an ODU Topology described in
section 5.1.1.
This section provides the detailed JSON code describing how the
setup of this ODU2 transit service can be requested by the MDSC,
using the [TE-TUNNEL] and [OTN-TUNNEL] YANG models at the MPI.
JSON code "mpi1-odu2-service-config.json" has been provided in the
appendix of this document.
6.3. JSON Example for Protection Configuration
To be added
7. Security Considerations
This section is for further study
8. IANA Considerations
This document requires no IANA actions.

9. References

9.1. Normative References
[RFC7926] Farrel, A. et al., "Problem Statement and Architecture for
          Information Exchange between Interconnected Traffic-
          Engineered Networks", BCP 206, RFC 7926, July 2016.

[RFC4427] Mannie, E., Papadimitriou, D., "Recovery (Protection and
          Restoration) Terminology for Generalized Multi-Protocol
          Label Switching (GMPLS)", RFC 4427, March 2006.

[ACTN-Frame] Ceccarelli, D., Lee, Y. et al., "Framework for
          Abstraction and Control of Transport Networks", draft-
          ietf-teas-actn-framework, work in progress.

[ITU-T G.709] ITU-T Recommendation G.709 (06/16), "Interfaces for
          the optical transport network", June 2016.

[ITU-T G.808.1] ITU-T Recommendation G.808.1 (05/14), "Generic
          protection switching - Linear trail and subnetwork
          protection", May 2014.

[ITU-T G.873.1] ITU-T Recommendation G.873.1 (05/14), "Optical
          transport network (OTN): Linear protection", May 2014.
[TE-TOPO] Liu, X. et al., "YANG Data Model for TE Topologies",
          draft-ietf-teas-yang-te-topo, work in progress.
[OTN-TOPO] Zheng, H. et al., "A YANG Data Model for Optical
Transport Network Topology", draft-ietf-ccamp-otn-topo-
yang, work in progress.
[CLIENT-TOPO] Zheng, H. et al., "A YANG Data Model for Client-layer
Topology", draft-zheng-ccamp-client-topo-yang, work in
progress.
[TE-TUNNEL] Saad, T. et al., "A YANG Data Model for Traffic
Engineering Tunnels and Interfaces", draft-ietf-teas-yang-
te, work in progress.
[PATH-COMPUTE] Busi, I., Belotti, S. et al, "Yang model for
requesting Path Computation", draft-busibel-teas-yang-
path-computation, work in progress.
[OTN-TUNNEL] Zheng, H. et al., "OTN Tunnel YANG Model", draft-
ietf-ccamp-otn-tunnel-model, work in progress.
[CLIENT-SVC] Zheng, H. et al., "A YANG Data Model for Optical
Transport Network Client Signals", draft-zheng-ccamp-otn-
client-signal-yang, work in progress.
9.2. Informative References
[RFC5151] Farrel, A. et al., "Inter-Domain MPLS and GMPLS Traffic
Engineering --Resource Reservation Protocol-Traffic
Engineering (RSVP-TE) Extensions", RFC 5151, February
2008.
[RFC6898] Li, D. et al., "Link Management Protocol Behavior
Negotiation and Configuration Modifications", RFC 6898,
March 2013.
[RFC8309] Wu, Q. et al., "Service Models Explained", RFC 8309,
January 2018.
[ACTN-YANG] Zhang, X. et al., "Applicability of YANG models for
          Abstraction and Control of Traffic Engineered Networks",
          draft-zhang-teas-actn-yang, work in progress.

[I2RS-TOPO] Clemm, A. et al., "A Data Model for Network Topologies",
          draft-ietf-i2rs-yang-network-topo, work in progress.
[ONF TR-527] ONF Technical Recommendation TR-527, "Functional
          Requirements for Transport API", June 2016.

[ONF GitHub] ONF Open Transport (SNOWMASS)
          https://github.com/OpenNetworkingFoundation/Snowmass-
          ONFOpenTransport
10. Acknowledgments

The authors would like to thank all members of the Transport NBI
Design Team involved in the definition of use cases, gap analysis
and guidelines for using the IETF YANG models at the Northbound
Interface (NBI) of a Transport SDN Controller.

The authors would like to thank Xian Zhang, Anurag Sharma, Sergio
Belotti, Tara Cummings, Michael Scharf, Karthik Sethuraman, Oscar
Gonzalez de Dios, Hans Bjursrom and Italo Busi for having initiated
the work on gap analysis for transport NBI and having provided
foundations work for the development of this document.
The authors would like to thank the authors of the TE Topology and
Tunnel YANG models [TE-TOPO] and [TE-TUNNEL], in particular Igor
Bryskin, Vishnu Pavan Beeram, Tarek Saad and Xufeng Liu, for their
support in addressing any gap identified during the analysis work.
This document was prepared using 2-Word-v2.0.template.dot.

Appendix A. Detailed JSON Examples
A.1. JSON Code: mpi1-otn-topology.json
The JSON code for this use case is currently located on GitHub at:
https://github.com/danielkinguk/transport-nbi/blob/master/Internet-
Drafts/Applicability-Statement/01/mpi1-otn-topology.json
A.2. JSON Code: mpi1-odu2-service-config.json
The JSON code for this use case is currently located on GitHub at:
https://github.com/danielkinguk/transport-nbi/blob/master/Internet-
Drafts/Applicability-Statement/01/mpi1-odu2-service-config.json
Appendix B. Validating a JSON fragment against a YANG Model
The objective is to have a tool that allows validating whether a
piece of JSON code is compliant with a YANG model without using a
client/server.
B.1. DSDL-based approach
The idea is to generate a JSON driver file (JTOX) from YANG, then
use it to translate JSON to XML and validate it against the DSDL
schemas, as shown in Figure 6.
Useful link: https://github.com/mbj4668/pyang/wiki/XmlJson
(2)
YANG-module ---> DSDL-schemas (RNG,SCH,DSRL)
| |
| (1) |
| |
Config/state JTOX-file | (4)
\ | |
\ | |
\ V V
JSON-file------------> XML-file ----------------> Output
(3)
Figure 6 - DSDL-based approach for JSON code validation
In order to allow the use of comments, following the convention
defined in Section 2, without impacting the validation process,
these comments will be automatically removed from the JSON file that
will be validated.
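A minimal way to implement that removal is sketched below. It assumes, purely for illustration, that comments are encoded as object members named "_comment"; the actual convention is the one defined in Section 2.

```python
import json

def strip_comments(obj, key="_comment"):
    """Recursively drop comment members so the JSON validates cleanly."""
    if isinstance(obj, dict):
        return {k: strip_comments(v, key) for k, v in obj.items() if k != key}
    if isinstance(obj, list):
        return [strip_comments(v, key) for v in obj]
    return obj

doc = json.loads(
    '{"_comment": "example", "node-id": "S3",'
    ' "ltp": [{"_comment": "x", "id": 1}]}')
print(json.dumps(strip_comments(doc)))
# -> {"node-id": "S3", "ltp": [{"id": 1}]}
```

The stripped output is what would then be fed into step (3) of Figure 6.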
B.2. Why an XSD-based approach is not used

This approach has been analyzed and discarded because it is no
longer supported by pyang.
The idea is to convert YANG to XSD, JSON to XML and validate it
against the XSD, as shown in Figure 7:
(1)
YANG-module ---> XSD-schema - \ (3)
+--> Validation
JSON-file------> XML-file ----/
(2)
Figure 7 - XSD-based approach for JSON code validation
The pyang support for the XSD output format was deprecated in
version 1.5 and removed in version 1.7.1. However, pyang 1.7.1 is
necessary to work with YANG 1.1, so the process shown in Figure 7
would stop at step (1).
Authors' Addresses
Italo Busi (Editor)
Huawei
Email: italo.busi@huawei.com

Daniel King (Editor)
Lancaster University
Email: d.king@lancaster.ac.uk
Haomian Zheng (Editor)
Huawei
Email: zhenghaomian@huawei.com
Yunbin Xu (Editor)
CAICT
Email: xuyunbin@ritt.cn
Yang Zhao
China Mobile
Email: zhaoyangyjy@chinamobile.com
Sergio Belotti
Nokia
Email: sergio.belotti@nokia.com

Gianmarco Bruno
Ericsson
Email: gianmarco.bruno@ericsson.com
Young Lee
Huawei
Email: leeyoung@huawei.com

Victor Lopez
Telefonica
Email: victor.lopezalvarez@telefonica.com

Carlo Perocchio
Ericsson
Email: carlo.perocchio@ericsson.com
Ricard Vilalta
CTTC
Email: ricard.vilalta@cttc.es