CCAMP Working Group                                           I. Busi (Ed.)
Internet Draft                                                 Huawei
Intended status: Informational                                D. King
                                                  Lancaster University
                                                             H. Zheng
                                                               Huawei
                                                                Y. Xu
                                                                CAICT

Expires: September 5, 2018                                 March 5, 2018

           Transport Northbound Interface Applicability Statement and Use Cases
              draft-ietf-ccamp-transport-nbi-app-statement-01

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on September 5, 2018.

Copyright Notice

   Copyright (c) 2018 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document. Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Abstract

   Transport network domains, including Optical Transport Network (OTN)
   and Wavelength Division Multiplexing (WDM) networks, are typically
   deployed based on a single vendor or technology platforms. They are
   often managed using proprietary interfaces to dedicated Element
   Management Systems (EMS), Network Management Systems (NMS) and
   increasingly Software Defined Network (SDN) controllers.

   A well-defined open interface to each domain management system or
   controller is required for network operators to facilitate control
   automation and orchestrate end-to-end services across multi-domain
   networks. These functions may be enabled using standardized data
   models (e.g., YANG) and an appropriate protocol (e.g., RESTCONF).

   This document analyses the applicability of the YANG models being
   defined by IETF (TEAS and CCAMP WGs in particular) to support OTN
   single and multi-domain scenarios.

Table of Contents

   1. Introduction..................................................3
      1.1. Scope of this document...................................4
      1.2. Assumptions..............................................5
   2. Terminology...................................................5
   3. Conventions used in this document.............................6
      3.1. Topology and traffic flow processing.....................6
      3.2. JSON code................................................7
   4. Scenarios Description.........................................8
      4.1. Reference Network........................................8
         4.1.1. Single-Domain Scenario.............................10
         4.1.2. Multi-Domain Scenario..............................10
      4.2. Topology Abstractions...................................10
      4.3. Service Configuration...................................12
         4.3.1. ODU Transit........................................13
         4.3.2. EPL over ODU.......................................13
         4.3.3. Other OTN Client Services..........................14
         4.3.4. EVPL over ODU......................................15
         4.3.5. EVPLAN and EVPTree Services........................16
         4.3.6. Dynamic Service Configuration......................18
      4.4. Multi-function Access Links.............................18
      4.5. Protection and Restoration Configuration................19
         4.5.1. Linear Protection (end-to-end).....................20
         4.5.2. Segmented Protection...............................21
         4.5.3. End-to-End Dynamic Restoration.....................21
         4.5.4. Segmented Dynamic Restoration......................22
      4.6. Service Modification and Deletion.......................23
      4.7. Notification............................................23
      4.8. Path Computation with Constraint........................23
   5. YANG Model Analysis..........................................24
      5.1. YANG Models for Topology Abstraction....................24
         5.1.1. Domain 1 Topology Abstraction......................25
         5.1.2. Domain 2 Grey (Type A) Topology Abstraction........26
         5.1.3. Domain 3 Grey (Type B) Topology Abstraction........26
         5.1.4. Multi-domain Topology Stitching....................26
         5.1.5. Access Links.......................................27
      5.2. YANG Models for Service Configuration...................28
         5.2.1. ODU Transit Service................................30
         5.2.2. EPL over ODU Service...............................32
         5.2.3. Other OTN Client Services..........................33
         5.2.4. EVPL over ODU Service..............................34
      5.3. YANG Models for Protection Configuration................35
         5.3.1. Linear Protection (end-to-end).....................35
         5.3.2. Segmented Protection...............................35
   6. Detailed JSON Examples.......................................35
      6.1. JSON Examples for Topology Abstractions.................35
         6.1.1. Domain 1 White Topology Abstraction................35
      6.2. JSON Examples for Service Configuration.................35
         6.2.1. ODU Transit Service................................35
      6.3. JSON Example for Protection Configuration...............36
   7. Security Considerations......................................36
   8. IANA Considerations..........................................36
   9. References...................................................36
      9.1. Normative References....................................36
      9.2. Informative References..................................37
   10. Acknowledgments.............................................38
   Appendix A. Detailed JSON Examples..............................39
      A.1. JSON Code: mpi1-otn-topology.json.......................39
      A.2. JSON Code: mpi1-odu2-service-config.json................39
   Appendix B. Validating a JSON fragment against a YANG Model.....40
      B.1. DSDL-based approach.....................................40
      B.2. Why not using a XSD-based approach......................40

1. Introduction

   Transport of packet services is critical for a wide range of
   applications and services, including: data center and LAN
   interconnects, Internet service backhauling, mobile backhaul and
   enterprise Carrier Ethernet services. These services are typically
   set up using stovepipe NMS and EMS platforms, often requiring
   proprietary management platforms and legacy management interfaces.
   A clear goal for network operators is to automate the setup of
   transport services across multiple transport technology domains.

   A common open interface (API) to each domain controller and/or
   management system is a pre-requisite for network operators to
   control multi-vendor and multi-domain networks and also to enable
   service provisioning coordination/automation. This can be achieved
   by using standardized YANG models together with an appropriate
   protocol (e.g., RESTCONF).

   This document analyses the applicability of the YANG models being
   defined by IETF (TEAS and CCAMP WGs in particular) to support OTN
   single and multi-domain scenarios.

1.1. Scope of this document

   This document assumes a reference architecture, including
   interfaces, based on the Abstraction and Control of Traffic-
   Engineered Networks (ACTN), defined in [ACTN-Frame].

   The focus of this document is on the MPI (interface between the
   Multi-Domain Service Coordinator (MDSC) and a Provisioning Network
   Controller (PNC), controlling a transport network domain).

   It is worth noting that the same MPI analyzed in this document
   could be used between hierarchical MDSC controllers, as shown in
   Figure 4 of [ACTN-Frame].

   Detailed analysis of the CMI (interface between the Customer Network
   Controller (CNC) and the MDSC) as well as of the interface between
   service and network orchestrators are outside the scope of this
   document. However, some considerations and assumptions about the
   information could be described when needed.

   The relationship between the current IETF YANG models and the type
   of ACTN interfaces can be found in [ACTN-YANG]. This document
   considers the TE Topology YANG model defined in [TE-TOPO], with the
   OTN Topology augmentation defined in [OTN-TOPO] and the TE Tunnel
   YANG model defined in [TE-TUNNEL], with the OTN Tunnel augmentation
   defined in [OTN-TUNNEL].

   The analysis of how to use the attributes in the I2RS Topology YANG
   model, defined in [I2RS-TOPO], is for further study.

   The ONF Technical Recommendations for Functional Requirements for
   the transport API in [ONF TR-527] and the ONF transport API multi-
   domain examples in [ONF GitHub] have been considered as an input
   for defining the reference scenarios analyzed in this document.

1.2. Assumptions

   This document is making the CMI (interface between following assumptions, still to be
   validated with TEAS WG:

   1. The MDSC can request, at the Customer Network
   Controller (CNC) and MPI, a PNC to setup a Transit Tunnel
      Segment using the MDSC) TE Tunnel YANG model: in this case, since the
      endpoints of the E2E Tunnel are outside the scope domain controlled by
      that PNC, the MDSC would not specify any source or destination
      TTP (i.e., it would leave the source, destination, src-tp-id and
      dst-tp-id attributes empty) and it would use the explicit-route-
      object list to specify the ingress and egress links of this
   document. the
      Transit Tunnel Segment.

   2. Each PNC provides to the MDSC, at the MPI, the list of available
      timeslots on the inter-domain links using the TE Topology YANG
      model and OTN Topology augmentation. The TE Topology YANG model
      in [TE-TOPO] is being updated to report the label set
      information.

   This document is also making the following assumptions, still to be
   validated with CCAMP WG:

2. Terminology

   Domain: defined as a collection of network elements within a common
   realm of address space or path computation responsibility [RFC5151]

   E-LINE: Ethernet Line

   EPL: Ethernet Private Line

   EVPL: Ethernet Virtual Private Line

   OTH: Optical Transport Hierarchy

   OTN: Optical Transport Network

   Service: A service in the context of this document can be considered
   as some form of connectivity between customer sites across the
   network operator's network [RFC8309]

   Service Model: As described in [RFC8309], it describes a service
   and the parameters of the service in a portable way that can be
   used uniformly and independently of the equipment and operating
   environment.

   UNI: User Network Interface

   MDSC: Multi-Domain Service Coordinator

   CNC: Customer Network Controller

   PNC: Provisioning Network Controller

   MAC Bridging: Virtual LANs (VLANs) on IEEE 802.3 Ethernet network

3. Conventions used in this document

3.1. Topology and traffic flow processing

   The traffic flow between different nodes is specified as an ordered
   list of nodes, separated with commas, indicating within the brackets
   the processing within each node:

      <node> (<processing>) {, <node> (<processing>)}

   The order represents the order of traffic flow being forwarded
   through the network.

   The processing can be either an adaptation of a client layer into a
   server layer "(client -> server)" or switching at a given layer
   "([switching])". Multi-layer switching is indicated by two layer
   switching with client/server adaptation: "([client] -> [server])".

   For example, the following traffic flow:

      C-R1 ([PKT] -> ODU2), S3 ([ODU2]), S5 ([ODU2]), S6 ([ODU2]),
      C-R3 (ODU2 -> [PKT])

   Node C-R1 is switching at the packet (PKT) layer and mapping packets
   into an ODU2 before transmission to node S3. Nodes S3, S5 and S6 are
   switching at the ODU2 layer: S3 sends the ODU2 traffic to S5 which
   then sends it to S6 which finally sends to C-R3. Node C-R3
   terminates the ODU2 from S6 before switching at the packet (PKT)
   layer.

   The paths of working and protection transport entities are specified
   as an ordered list of nodes, separated with commas:

      <node> {, <node>}

   The order represents the order of traffic flow being forwarded
   through the network in the forward direction. In case of
   bidirectional paths, the forward and backward directions are
   selected arbitrarily, but the convention is consistent between
   working/protection path pairs as well as across multiple domains.
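The notation above is intended for human readers, but it is regular enough to be processed mechanically. The following is a minimal, hypothetical helper (not part of any IETF model) that parses a traffic flow string into (node, processing) pairs, assuming node names contain only letters, digits and hyphens:

```python
import re

def parse_traffic_flow(flow):
    """Parse the traffic flow notation used in this document, e.g.
    "C-R1 ([PKT] -> ODU2), S3 ([ODU2])", into (node, processing) pairs."""
    # Each element is "<node> (<processing>)"; the processing string
    # never contains a closing parenthesis.
    return re.findall(r'([A-Za-z0-9-]+)\s*\(([^)]*)\)', flow)

flow = ("C-R1 ([PKT] -> ODU2), S3 ([ODU2]), S5 ([ODU2]), S6 ([ODU2]), "
        "C-R3 (ODU2 -> [PKT])")

for node, processing in parse_traffic_flow(flow):
    print(node, ":", processing)
```

Applied to the example above, the helper recovers five elements: packet switching and adaptation at C-R1, ODU2 switching at S3, S5 and S6, and termination at C-R3.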

3.2. JSON code

   This document provides some detailed JSON code examples to describe
   how the YANG models being developed by IETF (TEAS and CCAMP WG in
   particular) can be used.

   The examples are provided using JSON because JSON code is easier for
   humans to read and write.

   Different objects need to have an identifier. The convention used to
   create mnemonic identifiers is to use the object name (e.g., S3 for
   node S3), followed by its type (e.g., NODE), separated by an "-",
   followed by "-ID". For example, the mnemonic identifier for node S3
   would be S3-NODE-ID.

   The JSON language does not support comments, which have however
   been found useful when writing the examples. This document inserts
   comments into the JSON code as JSON name/value pairs with the JSON
   name string starting with the "//" characters. For
   example, when describing the example of a TE Topology instance
   representing the ODU Abstract Topology exposed by the Transport PNC,
   the following comment has been added to the JSON code:

      "// comment": "ODU Abstract Topology @ MPI",

   The JSON code examples provided in this document have been
   validated against the YANG models following the validation process
   described in Appendix B, which would not consider the comments.

   In order to have successful validation of the examples, some
   numbering scheme has been defined to assign identifiers to the
   different entities which would pass the syntax checks. In that
   case, to simplify the reading, another JSON name/value pair,
   formatted as a comment and using the mnemonic identifier, is also
   provided. For example, the identifier of node S3 (S3-NODE-ID) has
   been assumed to be "10.0.0.3" and would be shown in the JSON code
   example using the two JSON name/value pairs:

      "// te-node-id": "S3-NODE-ID",

      "te-node-id": "10.0.0.3",

   The first JSON name/value pair will be automatically removed in the
   first step of the validation process while the second JSON
   name/value pair will be validated against the YANG model
   definitions.
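The comment-removal step described above can be sketched as follows; this is an illustrative helper, not part of the validation tooling in Appendix B, which simply drops every member whose name starts with "//" before the fragment is checked against the YANG models:

```python
import json

def strip_comments(obj):
    """Recursively remove JSON members whose name starts with "//"."""
    if isinstance(obj, dict):
        return {k: strip_comments(v) for k, v in obj.items()
                if not k.startswith("//")}
    if isinstance(obj, list):
        return [strip_comments(v) for v in obj]
    return obj

# The two JSON name/value pairs shown above for node S3:
fragment = json.loads("""
{
  "// te-node-id": "S3-NODE-ID",
  "te-node-id": "10.0.0.3"
}
""")

print(json.dumps(strip_comments(fragment)))
# prints {"te-node-id": "10.0.0.3"}
```

After this step only the "te-node-id" member remains and can be validated against the YANG model definitions.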

4. Scenarios Description

4.1. Reference Network

   The physical topology of the reference network is shown in Figure 1.
   It represents an OTN network composed of three transport network
   domains providing transport services to an IP customer network
   through eight access links:

                   ........................
      ..........   :                      :
      :        :   :  Network domain 1    :
      :Customer:   :                      :
      : domain :   :     S1 ------ S2 ---------+
      :        :   :    /          |       :   |   .............
      :  C-R1 ------- S3 ----- S4  |       :   |   :  Network  :
      :        :   :    \        \ |       :   |   :  domain 3 :   ..........
      :        :   :     S5       \|       :   |   :           :   :        :
      :  C-R2 ------+    | \       |       :   +----- S31 -------- C-R7    :
      :        :   : \   |  \      |       :       :  /  \     :   :        :
      :        :   :  S6 --- S7 -- S8 ----------- S32    S33 ----- C-R8    :
      :  C-R3 -------/       |     |       :       :  \   /    :   :Customer:
      :        :   :.........|.....|.......:       :   S34     :   : domain :
      :........:             |     |               :../../.....:   :........:
                             |     |                 /  /
                  ...........|.....|....../..../...
                  :          |     |     /    /   :    ..............
                  : Network  |     |    /    /    :    :            :
                  : domain 2 |     |   /    /     :    :  Customer  :
                  :         S11 --- S12   /       :    :   domain   :
                  :        /          | \ /       :    :            :
                  :     S13     S14   | S15 ------------ C-R4      :
                  :     |  \   /   \  |    \      :    :            :
                  :     |   S16     \ |     \     :    :            :
                  :     |  /         S17 -- S18 -------- C-R5      :
                  :     | /             \   /     :    :            :
                  :    S19 ---- S20 ---- S21 ------------ C-R6     :
                  :                               :    :            :
                  :...............................:    :...........:

                        Figure 1 Reference network

   The IP customer network is composed of eight routers (C-R1 to C-R8)
   while the three transport network domains are composed of ODU
   switches (S1 to S8, S11 to S21 and S31 to S34, respectively). The
   transport network acts as a transit network providing connectivity
   for IP layer services.

   The behavior of the transport network is the same whether the
   ingress and egress service nodes in the IP domain are directly
   attached to the transport network, or whether there are other
   routers in between which are not attached to the transport network.
   In other words, the behavior of the transport network does not
   depend on whether C-R1, C-R2, ..., C-R8 are PE or P routers for the
   IP services.

   The transport network control plane architecture, shown in Figure
   2, follows the ACTN architecture and framework defined in [ACTN-
   Frame], with the following functional components:

   o Customer Network Controller (CNC), which acts as a client with
      respect to the Multi-Domain Service Coordinator (MDSC) via the
      CNC-MDSC Interface (CMI);

   o MDSC, which is connected to a plurality of Provisioning Network
      Controllers (PNCs), one for each domain, via a MDSC-PNC
      Interface (MPI). Each PNC is responsible only for the control of
      its domain and the MDSC is the only entity capable of
      multi-domain functionalities as well as of managing the
      inter-domain links.

   The ACTN framework facilitates the detachment of the network and
   service control from the underlying technology and helps the
   customer express the network as desired by business needs.
   Therefore, care must be taken to keep minimal dependency on the CMI
   (or no dependency at all) with respect to the network domain
   technologies.
   The MPI instead requires some specialization according to the domain
   technology.

                            --------------
                           |              |
                           |     CNC      |
                           |              |
                            --------------
                                  |
              ....................|....................... CMI
                                  |
                           ----------------
                          |                |
                          |      MDSC      |
                          |                |
                           ----------------
                            /     |      \
                           /      |       \
             ............../......|........\............. MPIs
                          /       |         \
                         /        |          \
                 ----------   ----------   ----------
                |   PNC1   | |   PNC2   | |   PNC3   |
                 ----------   ----------   ----------
                     |            |            |
                   -----        -----        -----
                 (       )    (       )    (       )
                (         )  (         )  (         )
               (  Network  )(  Network  )(  Network  )
               (  Domain 1 )(  Domain 2 )(  Domain 3 )
                (         )  (         )  (         )
                 (       )    (       )    (       )
                   -----        -----        -----

                     Figure 2 Controlling Hierarchy

   In this document we address the use case where the CNC controls the
   customer IP routers only and requests, at the CMI, transport
   connectivity among the IP routers to an MDSC which coordinates, via
   three MPIs, the control of a multi-domain transport network through
   three PNCs.

   The interfaces within the scope of this document are the three
   MPIs, while the interface between the CNC and the IP routers is out
   of scope. It is also assumed that the CMI allows the CNC to provide
   all the information that is required by the MDSC to properly
   configure the transport connectivity requested by the customer.

4.1.1. Single-Domain Scenario

   In case the CNC requests transport connectivity between IP routers
   attached to the same transport domain (e.g., between C-R1 and
   C-R3), the MDSC can pass the service request to the PNC of that
   domain (e.g., PNC1) and let the PNC take decisions about how to
   implement the service.

4.1.2. Multi-Domain Scenario

   In case the CNC requests transport connectivity between IP routers
   attached to different transport domains (e.g., between C-R1 and
   C-R5), the MDSC can split the service request into tunnel segments,
   pass their configuration to multiple PNCs (PNC1 and PNC2 in this
   example) and let each PNC take decisions about how to implement its
   segment of the service.
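The distinction between the two scenarios can be sketched as follows. The access-link table and the helper below are hypothetical and only illustrate how an MDSC could identify the endpoint domains (and hence the PNCs) involved in a connectivity request; any transit domains would additionally be determined by path computation on the stitched multi-domain topology:

```python
# Hypothetical mapping of customer routers to the transport domain and
# border node terminating their access link, following the reference
# network of Figure 1.
ACCESS = {
    "C-R1": ("domain1", "S3"),
    "C-R3": ("domain1", "S6"),
    "C-R4": ("domain2", "S15"),
    "C-R5": ("domain2", "S18"),
    "C-R7": ("domain3", "S31"),
}

def pncs_involved(src, dst):
    """Return the set of endpoint domains (hence PNCs) for a request."""
    return {ACCESS[src][0], ACCESS[dst][0]}

print(pncs_involved("C-R1", "C-R3"))  # single-domain scenario
print(pncs_involved("C-R1", "C-R5"))  # multi-domain scenario
```

In the first case the MDSC can delegate the whole request to PNC1; in the second case it must split the request between PNC1 and PNC2.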

4.2. Topology Abstractions

   Abstraction provides a selective method for representing
   connectivity information within a domain. There are multiple methods
   to abstract a network topology. This document assumes the
   abstraction method defined in [RFC7926]:

     "Abstraction is the process of applying policy to the available TE
     information within a domain, to produce selective information that
     represents the potential ability to connect across the domain.
     Thus, abstraction does not necessarily offer all possible
     connectivity options, but presents a general view of potential
     connectivity according to the policies that determine how the
     domain's administrator wants to allow the domain resources to be
     used."

   [TE-TOPO] describes a YANG base model for TE topology without any
   technology-specific parameters. Moreover, it defines how to
   abstract TE network topologies.

   [ACTN-Frame] provides the context of topology abstraction in the
   ACTN architecture and discusses a few alternatives for the
   abstraction methods for both packet and optical networks. This is an
   important consideration since the choice of the abstraction method
   impacts protocol design and the information it carries.  According
   to [ACTN-Frame], there are three types of topology:

   o White topology: This is a case where the PNC provides the actual
      network topology to the MDSC without any hiding or filtering. In
      this case, the MDSC has the full knowledge of the underlying
      network topology;

   o Black topology: The entire domain network is abstracted as a
      single virtual node with the access/egress links without
      disclosing any node internal connectivity information;

   o Grey topology: This abstraction level is between black topology
      and white topology from a granularity point of view. This is
      abstraction of TE tunnels for all pairs of border nodes. We may
      further differentiate from a perspective of how to abstract
      internal TE resources between the pairs of border nodes:

         - Grey topology type A: border nodes with TE links between
           them in a full mesh fashion;

        - Grey topology type B: border nodes with some internal
          abstracted nodes and abstracted links.

   For single-domain with single-layer use-case, the white topology may
   be disseminated from the

   Each PNC to should provide the MDSC in most cases. There may be
   some exception to this in the case where a topology abstraction of the underlay
   domain's network may
   have complex optical parameters, which do not warrant topology.

   Each PNC provides topology abstraction of its own domain topology
   independently from each other and therefore it is possible that
   different PNCs provide different types of topology abstractions.

   The MPI operates on the abstract topology regardless of the type of
   abstraction provided by the PNC.

   To analyze how the MPI operates on abstract topologies
   independently of the topology abstraction provided by each PNC,
   and, therefore, how different PNCs can provide different topology
   abstractions, it is assumed that:

   o PNC1 provides a topology abstraction which exposes at the MPI an
      abstract node and an abstract link for each physical node and
      link within network domain 1;

   o PNC2 provides a topology abstraction which exposes at the MPI a
      single abstract node (representing the whole network domain) with
      abstract links representing only the inter-domain physical links;

   o PNC3 provides a topology abstraction which exposes at the MPI two
      abstract nodes (AN31 and AN32), abstracting respectively nodes
      S31+S33 and nodes S32+S34. At the MPI, only the abstract nodes
      should be reported: the mapping between the abstract nodes (AN31
      and AN32) and the physical nodes (S31, S32, S33 and S34) should
      be done internally by the PNC.

   The MDSC should be capable to stitch together each abstracted
   topology to build its own view of the multi-domain network topology.
   The process may require suitable oversight, including administrative
   configuration and trust models, but this is out of scope for this
   document.
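
   This stitching behaviour can be illustrated with a small,
   non-normative Python sketch. The data structures, node names and
   inter-domain links below are hypothetical and for illustration
   only; the actual MDSC behaviour is driven by the TE Topology
   models exchanged at the MPIs.

```python
# Non-normative sketch: the MDSC merges the abstract topology exposed
# by each PNC at its MPI and joins them over the inter-domain links.

def stitch_topologies(domain_topologies, inter_domain_links):
    """Build the MDSC's multi-domain view from per-PNC abstractions.

    domain_topologies: dict mapping a domain id to a dict with
        'nodes' (set of abstract node ids) and 'links' (set of
        (node, node) tuples internal to that domain).
    inter_domain_links: iterable of (node, node) tuples connecting
        abstract nodes that belong to different domains.
    """
    view = {"nodes": set(), "links": set()}
    for topo in domain_topologies.values():
        view["nodes"] |= topo["nodes"]
        view["links"] |= topo["links"]
    # Inter-domain links are known to the MDSC (e.g., via
    # configuration) and stitch the per-domain topologies together.
    for a, b in inter_domain_links:
        assert a in view["nodes"] and b in view["nodes"]
        view["links"].add((a, b))
    return view

# Example echoing the abstractions assumed above: PNC2 exposes a
# single abstract node (AN2), PNC3 exposes two (AN31, AN32).
multi_domain = stitch_topologies(
    {
        1: {"nodes": {"S1", "S2", "S3"},
            "links": {("S3", "S1"), ("S1", "S2")}},
        2: {"nodes": {"AN2"}, "links": set()},
        3: {"nodes": {"AN31", "AN32"}, "links": {("AN31", "AN32")}},
    },
    inter_domain_links=[("S2", "AN31"), ("AN31", "AN2")],
)
```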

   A method and process for topology abstraction at the CMI is
   required, and will be discussed in a future revision of this
   document.

4.3. Service Configuration

   In the following scenarios, it is assumed that the CNC is capable
   to request service connectivity from the MDSC to support IP routers
   connectivity.

   The type of services could depend on the type of physical links
   (e.g., OTN link, ETH link or SDH link) between the routers and the
   transport network.

   The control of the different adaptation functions inside the IP
   routers, C-Ri (PKT -> foo) and C-Rj (foo -> PKT), is assumed to be
   performed by means that are not under the control of, and not
   visible to, the MDSC nor to the PNCs. Therefore, these mechanisms
   are outside the scope of this document.

   It is just assumed that the CNC is capable to request the proper
   configuration of the different adaptation functions inside the
   customer's IP routers, by means which are outside the scope of this
   document.

4.3.1. ODU Transit

   The physical links interconnecting the IP routers and the transport
   network can be OTN links. In this case, the physical/optical
   interconnections below the ODU layer are supposed to be pre-
   configured and not exposed at the MPI to the MDSC.

   To setup a 10Gb IP link between C-R1 and C-R5, an ODU2 end-to-end
   data plane connection needs to be created between C-R1 and C-R5,
   crossing transport nodes S3, S1, S2, S31, S33, S34, S15 and S18,
   which belong to different PNC domains.

   The traffic flow between C-R1 and C-R5 can be summarized as:

      C-R1 ([PKT] -> ODU2), S3 ([ODU2]), S1 ([ODU2]), S2 ([ODU2]),
      S31 ([ODU2]), S33 ([ODU2]), S34 ([ODU2]),
      S15 ([ODU2]), S18 ([ODU2]), C-R5 (ODU2 -> [PKT])

   It is assumed that the CNC requests, via the CMI, the setup of an
   ODU2 transit service, providing all the information that the MDSC
   needs to understand that it shall setup a multi-domain ODU2 segment
   connection between nodes S3 and S18.

   In case the CNC needs the setup of a 10Gb IP link between C-R1 and
   C-R3 (single-domain service request), the traffic flow between C-R1
   and C-R3 can be summarized as:

      C-R1 ([PKT] -> ODU2), S3 ([ODU2]), S5 ([ODU2]), S6 ([ODU2]),
      C-R3 (ODU2 -> [PKT])

   Since the CNC is unaware of the transport network domains, it
   requests the setup of an ODU2 transit service in the same way as
   before, regardless of the fact that this is a single-domain
   service.

   It is assumed that the information provided at the CMI is
   sufficient for the MDSC to understand that this is a single-domain
   service request.

   The MDSC can then just request PNC1 to setup a single-domain ODU2
   data plane segment connection between nodes S3 and S6.
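
   The MDSC's decision between a single-domain and a multi-domain
   request can be sketched informally as follows. The node-to-domain
   mapping is a hypothetical stand-in for the example network; transit
   domains (e.g., domain 3 for a service between S3 and S18) would be
   determined afterwards by path computation on the stitched topology.

```python
# Non-normative sketch: classifying a service request as single- or
# multi-domain from the domains of its transport edge nodes.

ACCESS_NODE_DOMAIN = {"S3": 1, "S6": 1, "S18": 2, "S31": 3}

def domains_for_service(src_node, dst_node):
    """Return the set of edge PNC domains involved in the service.

    Transit domains are discovered later by path computation on the
    MDSC's stitched multi-domain topology.
    """
    return {ACCESS_NODE_DOMAIN[src_node], ACCESS_NODE_DOMAIN[dst_node]}

# C-R1 to C-R3: both edge nodes (S3, S6) are in domain 1, so the
# MDSC can delegate the whole request to PNC1.
single = domains_for_service("S3", "S6")

# C-R1 to C-R5: edge nodes S3 (domain 1) and S18 (domain 2), so the
# MDSC must coordinate multiple PNCs.
multi = domains_for_service("S3", "S18")
```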

4.3.2. EPL over ODU

   The physical links interconnecting the IP routers and the transport
   network can be Ethernet links.

   To setup a 10Gb IP link between C-R1 and C-R5, an EPL service needs
   to be created between C-R1 and C-R5, supported by an ODU2 end-to-end
   data plane connection between transport nodes S3 and S18, crossing
   transport nodes S1, S2, S31, S33, S34 and S15, which belong to
   different PNC domains.

   The traffic flow between C-R1 and C-R5 can be summarized as:

      C-R1 ([PKT] -> ETH), S3 (ETH -> [ODU2]), S1 ([ODU2]),
      S2 ([ODU2]), S31 ([ODU2]), S33 ([ODU2]), S34 ([ODU2]),
      S15 ([ODU2]), S18 ([ODU2] -> ETH), C-R5 (ETH -> [PKT])

   It is assumed that the CNC requests, via the CMI, the setup of an
   EPL service, providing all the information that the MDSC needs to
   understand that it shall coordinate the three PNCs to setup a
   multi-domain ODU2 end-to-end data plane connection between nodes S3
   and S18 as well as the configuration of the adaptation functions
   inside nodes S3 and S18: S3 (ETH -> [ODU2]), S18 ([ODU2] -> ETH),
   S18 (ETH -> [ODU2]) and S3 ([ODU2] -> ETH).

   In case the CNC needs the setup of a 10Gb IP link between C-R1 and
   C-R3 (single-domain service request), the traffic flow between C-R1
   and C-R3 can be summarized as:

      C-R1 ([PKT] -> ETH), S3 (ETH -> [ODU2]), S5 ([ODU2]),
      S6 ([ODU2] -> ETH), C-R3 (ETH -> [PKT])

   As described in section 4.3.1, the CNC requests the setup of an EPL
   service in the same way as before and the information provided at
   the CMI is sufficient for the MDSC to understand that this is a
   single-domain service request.

   The MDSC can then just request PNC1 to setup a single-domain EPL
   service between nodes S3 and S6. PNC1 can take care of setting up
   the single-domain ODU2 end-to-end connection between nodes S3 and S6
   as well as of configuring the adaptation functions on these edge
   nodes.

4.3.3. Other OTN Client Services

   [ITU-T G.709] defines mappings of different client layers into
   ODU. Most of them are used to provide Private Line services over
   an OTN transport network supporting a variety of types of physical
   access links (e.g., Ethernet, SDH STM-N, Fibre Channel, InfiniBand,
   etc.).

   The physical links interconnecting the IP routers and the transport
   network can be any one of these possible types.

   To setup a 10Gb IP link between C-R1 and C-R5 using, for example,
   SDH physical links between the IP routers and the transport
   network, an STM-64 Private Line service needs to be created between
   C-R1 and C-R5, supported by an ODU2 end-to-end data plane
   connection between transport nodes S3 and S18, crossing transport
   nodes S1, S2, S31, S33, S34 and S15, which belong to different PNC
   domains.

   The traffic flow between C-R1 and C-R5 can be summarized as:

      C-R1 ([PKT] -> STM-64), S3 (STM-64 -> [ODU2]), S1 ([ODU2]),
      S2 ([ODU2]), S31 ([ODU2]), S33 ([ODU2]), S34 ([ODU2]),
      S15 ([ODU2]), S18 ([ODU2] -> STM-64), C-R5 (STM-64 -> [PKT])

   As described in section 4.3.2, it is assumed that the CNC is
   capable, via the CMI, to request the setup of an STM-64 Private
   Line service, providing all the information that the MDSC needs to
   coordinate the setup of a multi-domain ODU2 end-to-end connection
   as well as the adaptation functions on the edge nodes.

   In the single-domain case (10Gb IP link between C-R1 and C-R3),
   the traffic flow between C-R1 and C-R3 can be summarized as:

      C-R1 ([PKT] -> STM-64), S3 (STM-64 -> [ODU2]), S5 ([ODU2]),
      S6 ([ODU2] -> STM-64), C-R3 (STM-64 -> [PKT])

   As described in section 4.3.1, the CNC requests the setup of an STM-
   64 Private Line service in the same way as before and the
   information provided at the CMI is sufficient for the MDSC to
   understand that this is a single-domain service request.

   As described in section 4.3.2, the MDSC could just request PNC1 to
   setup a single-domain STM-64 Private Line service between nodes S3
   and S6.

4.3.4. EVPL over ODU

   When the physical links interconnecting the IP routers and the
   transport network are Ethernet links, it is also possible that
   different Ethernet services (e.g., EVPL) share the same physical
   link using different VLANs.

   To setup two 1Gb IP links between C-R1 and C-R3 and between C-R1
   and C-R5, two EVPL services need to be created, supported by two
   ODU0 end-to-end connections, respectively between S3 and S6,
   crossing transport node S5, and between S3 and S18, crossing
   transport nodes S1, S2, S31, S33, S34 and S15, which belong to
   different PNC domains.

   Since the two EVPL services share the same Ethernet physical link
   between C-R1 and S3, different VLAN IDs are associated with the
   different EVPL services: for example, VLAN IDs 10 and 20
   respectively.

   The traffic flow between C-R1 and C-R5 can be summarized as:

      C-R1 ([PKT] -> VLAN), S3 (VLAN -> [ODU0]), S1 ([ODU0]),
      S2 ([ODU0]), S31 ([ODU0]), S33 ([ODU0]), S34 ([ODU0]),
      S15 ([ODU0]), S18 ([ODU0] -> VLAN), C-R5 (VLAN -> [PKT])

   The traffic flow between C-R1 and C-R3 can be summarized as:

      C-R1 ([PKT] -> VLAN), S3 (VLAN -> [ODU0]), S5 ([ODU0]),
      S6 ([ODU0] -> VLAN), C-R3 (VLAN -> [PKT])

   As described in section 4.3.2, it is assumed that the CNC is
   capable, via the CMI, to request the setup of these EVPL services,
   providing all the information that the MDSC needs to understand
   that it needs to request PNC1 to setup an EVPL service between
   nodes S3 and S6 (single-domain service request) and that it also
   needs to coordinate the setup of a multi-domain ODU0 end-to-end
   data plane connection between nodes S3 and S18 as well as the
   adaptation functions on these edge nodes.

4.3.5. EVPLAN and EVPTree Services

   When the physical links interconnecting the IP routers and the
   transport network are Ethernet links, multipoint Ethernet services
   (e.g., EPLAN and EPTree) can also be supported. It is also possible
   that multiple Ethernet services (e.g., EVPL, EVPLAN and EVPTree)
   share the same physical link using different VLANs.

   Note - it is assumed that EPLAN and EPTree services can be supported
   by configuring EVPLAN and EVPTree with port mapping.

   Since this EVPLAN/EVPTree service can share the same Ethernet
   physical links between IP routers and transport nodes (e.g., with
   the EVPL services described in section 4.3.4), a different VLAN ID
   (e.g., 30) can be associated with this EVPLAN/EVPTree service.

   To setup an IP subnet between C-R1, C-R2, C-R3 and C-R5, an
   EVPLAN/EVPTree service needs to be created, supported by two
   ODUflex end-to-end connections, respectively between S3 and S6,
   crossing transport node S5, and between S3 and S18, crossing
   transport nodes S1, S2, S31, S33, S34 and S15, which belong to
   different PNC domains.

   Some MAC Bridging capabilities are also required on some nodes at
   the edge of the transport network: for example, Ethernet Bridging
   capabilities can be configured in nodes S3 and S6:

   o MAC Bridging in node S3 is needed to select, based on the MAC
      Destination Address, whether the Ethernet frames received from
      C-R1 should be forwarded to the ODUflex terminating on node S6
      or to the ODUflex terminating on node S18;

   o MAC Bridging in node S6 is needed to select, based on the MAC
      Destination Address, whether the Ethernet frames received from
      the ODUflex should be sent to C-R2 or C-R3, as well as whether
      the Ethernet frames received from C-R2 (or C-R3) should be sent
      to C-R3 (or C-R2) or to the ODUflex terminating on node S3.

   In order to support an EVPTree service instead of an EVPLAN,
   additional configuration of the Ethernet Bridging capabilities on
   the nodes at the edge of the transport network is required.

   The traffic flows between C-R1 and C-R3, between C-R3 and C-R5 and
   between C-R1 and C-R5 can be summarized as:

      C-R1 ([PKT] -> VLAN), S3 (VLAN -> [MAC] -> [ODUflex]),
      S5 ([ODUflex]), S6 ([ODUflex] -> [MAC] -> VLAN),
      C-R3 (VLAN -> [PKT])

      C-R3 ([PKT] -> VLAN), S6 (VLAN -> [MAC] -> [ODUflex]),
      S5 ([ODUflex]), S3 ([ODUflex] -> [MAC] -> [ODUflex]),
      S1 ([ODUflex]), S2 ([ODUflex]), S31 ([ODUflex]),
      S33 ([ODUflex]), S34 ([ODUflex]),
      S15 ([ODUflex]), S18 ([ODUflex] -> VLAN), C-R5 (VLAN -> [PKT])

      C-R1 ([PKT] -> VLAN), S3 (VLAN -> [MAC] -> [ODUflex]),
      S1 ([ODUflex]), S2 ([ODUflex]), S31 ([ODUflex]),
      S33 ([ODUflex]), S34 ([ODUflex]),
      S15 ([ODUflex]), S18 ([ODUflex] -> VLAN), C-R5 (VLAN -> [PKT])

   As described in section 4.3.2, it is assumed that the CNC is
   capable, via the CMI, to request the setup of this EVPLAN/EVPTree
   service, providing all the information that the MDSC needs to
   understand that it needs to request PNC1 to setup an ODUflex
   connection between nodes S3 and S6 (single-domain service request)
   and that it also needs to coordinate the setup of a multi-domain
   ODUflex end-to-end data plane connection between nodes S3 and S18
   as well as the MAC bridging and the adaptation functions on these
   edge nodes.

   In case the CNC needs the setup of an EVPLAN/EVPTree service only
   between C-R1, C-R2 and C-R3 (single-domain service request), it
   would request the setup of this service in the same way as before
   and the information provided at the CMI is sufficient for the MDSC
   to understand that this is a single-domain service request.

   The MDSC can then just request PNC1 to setup a single-domain
   EVPLAN/EVPTree service between nodes S3 and S6. PNC1 can take care
   of setting up the single-domain ODUflex end-to-end connection
   between nodes S3 and S6 as well as of configuring the MAC bridging
   and the adaptation functions on these edge nodes.

4.3.6. Dynamic Service Configuration

   After a service has been established as described in the previous
   sections, there may be a demand to update some of its
   characteristics. A straightforward approach would be to terminate
   the current service and replace it with a new one. A more advanced
   approach would be dynamic configuration, in which case the
   connection is not interrupted.

   An example application is updating the SLA information for a given
   connection. For example, an ODU transit connection is set up
   according to section 4.3.1, with the corresponding SLA level of 'no
   protection'. After the establishment of this connection, the user
   would like to enhance this service with restoration after a
   potential failure, and a request is generated on the CMI. In this
   case, after receiving the request, the MDSC would need to send an
   update message to the PNC, changing the SLA parameters in the TE
   Tunnel model. The PNC would then change the connection
   characteristics and send a notification to the MDSC for
   acknowledgement.

4.4. Multi-function Access Links

   Some physical links interconnecting the IP routers and the
   transport network can be configured in different modes, e.g., as
   OTU2, STM-64 or 10GE.

   This configuration can be done a-priori by means outside the scope
   of this document. In this case, these links will appear at the MPI
   either as an ODU Link or as an a STM-64 Link or as a 10GE Link
   (depending on the a-priori configuration) and will be controlled at
   the MPI as discussed in section 4.3.

   It is also possible not to configure these links a-priori and to
   let the MPI decide, based on the service configuration, how to
   configure them.

   For example, if the physical link between C-R1 and S3 is a multi-
   function access link, while the physical links between C-R7 and S31
   and between C-R5 and S18 are STM-64 and 10GE physical links
   respectively, it is possible at the MPI to configure either an
   STM-64 Private Line service between C-R1 and C-R7 or an EPL service
   between C-R1 and C-R5.

   The traffic flow between C-R1 and C-R7 can be summarized as:

      C-R1 ([PKT] -> STM-64), S3 (STM-64 -> [ODU2]), S1 ([ODU2]),
      S2 ([ODU2]), S31 ([ODU2] -> STM-64), C-R7 (STM-64 -> [PKT])

   The traffic flow between C-R1 and C-R5 can be summarized as:

      C-R1 ([PKT] -> ETH), S3 (ETH -> [ODU2]), S1 ([ODU2]),
      S2 ([ODU2]), S31 ([ODU2]), S33 ([ODU2]), S34 ([ODU2]),
      S15 ([ODU2]), S18 ([ODU2] -> ETH), C-R5 (ETH -> [PKT])

   As described in section 4.3.2, it is assumed that the CNC is
   capable, via the CMI, to request the setup of either an STM-64
   Private Line service between C-R1 and C-R7 or an EPL service
   between C-R1 and C-R5, providing all the information that the MDSC
   needs to understand that it needs to coordinate the setup of a
   multi-domain ODU2 end-to-end data plane connection, either between
   nodes S3 and S31 or between nodes S3 and S18, as well as the
   adaptation functions on these edge nodes, and in particular whether
   the multi-function access link between C-R1 and S3 should operate
   as an STM-64 or as a 10GE link.
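
   A purely illustrative way to derive the access link mode from the
   requested service type is sketched below. The service names and
   the mapping are assumptions for this example, not part of any
   model defined elsewhere.

```python
# Hypothetical sketch: selecting the operating mode of a
# multi-function access link from the requested service type, when
# the link mode is not configured a-priori.

SERVICE_TO_LINK_MODE = {
    "STM-64-private-line": "STM-64",
    "EPL": "10GE",
    "ODU2-transit": "OTU2",
}

def configure_access_link(service_type):
    """Return the link mode to configure for the given service type."""
    try:
        return SERVICE_TO_LINK_MODE[service_type]
    except KeyError:
        raise ValueError(f"unsupported service type: {service_type}")

# The C-R1 to S3 link would operate as STM-64 for a Private Line
# service and as 10GE for an EPL service.
```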

4.5. Protection and Restoration Configuration

   Protection switching provides a pre-allocated survivability
   mechanism, typically provided via linear protection methods and
   would be configured to operate as 1+1 unidirectional (the most
   common OTN protection method), 1+1 bidirectional or 1:n
   bidirectional. This ensures fast and simple service survivability.

   Restoration methods would provide the capability to reroute traffic
   and restore connectivity around network faults, without the network
   penalty imposed by dedicated 1+1 protection schemes.

   This section describes only services which are protected with linear
   protection and with dynamic restoration.

   The MDSC needs to be capable to coordinate different PNCs to
   configure protection switching when requesting the setup of the
   protected connectivity services described in section 4.3.

   Since in these service examples switching within the transport
   network is performed only in the OTN ODU layer, protection
   switching within the transport network can also only be provided at
   the OTN ODU layer, for all the services defined in section 4.3.


4.5.1. Linear Protection (end-to-end)

   To protect any service defined in section 4.3 from failures within
   the OTN multi-domain transport network, the MDSC should be capable
   to coordinate different PNCs to configure and control OTN linear
   protection in the data plane between nodes S3 and S18.

   It is assumed that the OTN linear protection is configured with
   1+1 unidirectional protection switching type, as defined in [ITU-T
   G.808.1] and [ITU-T G.873.1], as well as in [RFC4427].

   In these scenarios, a working transport entity and a protection
   transport entity, as defined in [ITU-T G.808.1] (or a working LSP
   and a protection LSP, as defined in [RFC4427]), should be
   configured in the data plane.

   Two cases can be considered:

   o In one case, the working and protection transport entities pass
      through the same PNC domains:

         Working transport entity:    S3, S1, S2,
                                      S31, S33, S34,
                                      S15, S18

         Protection transport entity: S3, S4, S8,
                                      S32,
                                      S12, S17, S18

   o In another case, the working and protection transport entities
      can pass through different PNC domains:

         Working transport entity:    S3, S5, S7,
                                      S11, S12, S17, S18

         Protection transport entity: S3, S1, S2,
                                      S31, S33, S34,
                                      S15, S18

   The PNCs should be capable to report to the MDSC which transport
   entity, as defined in [ITU-T G.808.1], is active in the data plane.

   Given the fast dynamics of protection switching operations in the
   data plane (50ms recovery time), this reporting is not expected to
   be in real-time.

   It is also worth noting that with unidirectional protection
   switching, e.g., 1+1 unidirectional protection switching, the active
   transport entity may be different in the two directions.

4.5.2. Segmented Protection

   To protect any service defined in section 4.3 from failures within
   the OTN multi-domain transport network, the MDSC should be capable
   to request each PNC to configure OTN intra-domain protection when
   requesting the setup of an end-to-end ODU2 data plane connection
   segment.

   If PNC1 provides linear protection, the working and protection
   transport entities could be:

      Working transport entity:    S3, S1, S2

      Protection transport entity: S3, S4, S8, S2

   If PNC2 provides linear protection, the working and protection
   transport entities could be:

      Working transport entity:    S15, S18

      Protection transport entity: S15, S12, S17, S18

   If PNC3 provides linear protection, the working and protection
   transport entities could be:

      Working transport entity:    S31, S33, S34

      Protection transport entity: S31, S32, S34
4.5.3. End-to-End Dynamic Restoration

   To restore any service defined in section 4.3 from failures within
   the OTN multi-domain transport network, the MDSC should be capable
   to coordinate different PNCs to configure and control OTN
   end-to-end dynamic restoration in the data plane between nodes S3
   and S18.

   For example, the MDSC can request PNC1, PNC2 and PNC3 to create a
   service with no protection, while the MDSC sets up the end-to-end
   service with dynamic restoration.

      Working transport entity:    S3, S1, S2,
                                   S31, S33, S34,
                                   S15, S18

   When a link failure occurs between S1 and S2 in network domain 1,
   PNC1 does not restore the tunnel and sends an alarm notification to
   the MDSC; the MDSC will then need to perform the end-to-end
   restoration.

      Restored transport entity:   S3, S4, S8,
                                   S12, S15, S18

4.5.4. Segmented Dynamic Restoration

   To restore any service defined in section 4.3 from failures within
   the OTN multi-domain transport network, the MDSC should be capable
   to coordinate different PNCs to configure and control OTN segmented
   dynamic restoration in the data plane between nodes S3 and S18.

      Working transport entity:    S3, S1, S2,
                                   S31, S33, S34,
                                   S15, S18

   When a link failure occurs between S1 and S2 in network domain 1,
   PNC1 will restore the tunnel and send an alarm or tunnel update
   notification to the MDSC; the MDSC will then update the restored
   tunnel.

      Restored transport entity:   S3, S4, S8, S2,
                                   S31, S33, S34,
                                   S15, S18

   When a link failure occurs between network domain 1 and network
   domain 2, PNC1 and PNC2 will send alarm notifications to the MDSC,
   and the MDSC will update the restored tunnel.

      Restored transport entity:   S3, S4, S8,
                                   S12, S15, S18

   In order to improve the efficiency of recovery, the controller can
   establish the recovery path concurrently across domains. If
   recovery fails in one domain or in one network element, a rollback
   operation should be supported.

   The controller can create the recovery path using the "make-before-
   break" method, in order to reduce the impact of the recovery
   operation on the services.
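
   The concurrent recovery with make-before-break and rollback
   described above can be sketched informally as follows. The
   per-domain setup/teardown interface is a hypothetical stand-in for
   the real MPI operations, used only to illustrate the ordering.

```python
# Non-normative sketch of concurrent recovery with make-before-break
# and rollback across multiple PNC domains.

class FakePNC:
    """Toy PNC recording which path segments are active in its domain."""
    def __init__(self, ok=True):
        self.ok = ok            # whether setup requests succeed
        self.active = set()

    def setup(self, segment):
        if self.ok:
            self.active.add(segment)
        return self.ok

    def teardown(self, segment):
        self.active.discard(segment)

def recover(pncs, old_segments, new_segments):
    """Establish the recovery path in all domains, then switch over.

    pncs: dict domain -> PNC; *_segments: dict domain -> path segment.
    Returns True on success; rolls back and returns False otherwise.
    """
    established = []
    for domain, segment in new_segments.items():
        if pncs[domain].setup(segment):
            established.append((domain, segment))
        else:
            # Recovery failed in one domain: roll back the segments
            # already established, leaving the old path untouched.
            for d, s in established:
                pncs[d].teardown(s)
            return False
    # Make-before-break: remove the old segments only after the whole
    # recovery path is in place, minimizing the impact on services.
    for domain, segment in old_segments.items():
        pncs[domain].teardown(segment)
    return True
```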

4.6. Service Modification and Deletion

   To be discussed in future versions of this document.

4.7. Notification

   To realize the topology update, service update and restoration
   functions, the following notification types should be supported:

   1. Object creation

   2. Object deletion

   3. Object state change

   4. Alarm

   Because three types of topology abstraction are defined in section
   4.2, the notifications should also be abstracted. The PNC and MDSC
   should coordinate to determine the notification policy: for
   example, when an intra-domain alarm occurs, the PNC may report a
   service state change notification to the MDSC rather than the
   alarm itself.
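
   One possible notification policy of this kind is sketched
   informally below. The event representation is an assumption of
   this example, not part of any YANG model; only the four
   notification types listed above are taken from the text.

```python
# Hypothetical sketch of a PNC-side notification policy: depending on
# the topology abstraction exposed at the MPI, an intra-domain alarm
# may be reported to the MDSC as a service state change instead of a
# raw alarm, since the affected resources are hidden.

def notify_mdsc(event, topology_abstraction):
    """Map an internal PNC event to the notification sent at the MPI.

    event: dict with 'type' ('create', 'delete', 'state-change',
    'alarm') and 'scope' ('intra-domain' or 'inter-domain').
    topology_abstraction: 'white', 'grey' or 'black'.
    """
    if (event["type"] == "alarm" and event["scope"] == "intra-domain"
            and topology_abstraction != "white"):
        # The alarmed resources are not visible to the MDSC, so report
        # the effect on the service instead of the alarm itself.
        return {"type": "state-change", "service": event.get("service")}
    return event  # otherwise forward the notification as-is
```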

4.8. Path Computation with Constraint

   Constraints may apply during the path computation procedure;
   typical cases include the Include Route Object (IRO) and Exclude
   Route Object (XRO). This information is carried in the TE Tunnel
   model and used when a request carries a constraint. Considering the
   example in section 4.3.1, the request can be a tunnel from C-R1 to
   C-R5 with an IRO including the link from S2 to S31; a qualified
   path would then be:

   C-R1 ([PKT] -> ODU2), S3 ([ODU2]), S1 ([ODU2]), S2 ([ODU2]),
   S31 ([ODU2]), S33 ([ODU2]), S34 ([ODU2]),
   S15 ([ODU2]), S18 ([ODU2]), C-R5 (ODU2 -> [PKT])

   If the request covers the IRO from S8 to S12, then the above path
   would not be qualified, while a possible computation result may be:

   C-R1 ([PKT] -> ODU2), S3 ([ODU2]), S1 ([ODU2]), S2 ([ODU2]),
   S8 ([ODU2]), S12 ([ODU2]), S15 ([ODU2]), S18 ([ODU2]), C-R5 (ODU2 ->
   [PKT])

   Similarly, an XRO constraint can be represented in the TE Tunnel
   model as well.
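
   The IRO/XRO check on a candidate path can be illustrated with the
   following non-normative sketch, using the paths from the examples
   above. The link-level representation of the constraints is
   deliberately simplified for illustration.

```python
# Non-normative sketch: checking a computed path against IRO (include
# route) and XRO (exclude route) constraints expressed as links.

def link_in_path(path, link):
    """True if the (a, b) link appears as consecutive hops of path."""
    return any((path[i], path[i + 1]) == link
               for i in range(len(path) - 1))

def path_satisfies(path, iro=(), xro=()):
    """A path qualifies if it traverses every IRO link and no XRO link."""
    return (all(link_in_path(path, l) for l in iro)
            and not any(link_in_path(path, l) for l in xro))

# The path from section 4.3.1 satisfies an IRO with the S2-S31 link,
# but not an IRO requiring the S8-S12 link.
path = ["S3", "S1", "S2", "S31", "S33", "S34", "S15", "S18"]
assert path_satisfies(path, iro=[("S2", "S31")])
assert not path_satisfies(path, iro=[("S8", "S12")])
```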

   When there is a technology-specific network domain (e.g., OTN), the
   corresponding technology-specific (OTN) model should also be used
   to specify the tunnel information on the MPI, with the constraint
   included in the TE Tunnel model.

5. YANG Model Analysis

   This section provides a high-level overview of how IETF YANG models
   can be used at the MPIs, between the MDSC and the PNCs, to support
   the scenarios described in section 4.

   Section 5.1 describes the different topology abstractions provided
   to the MDSC by each PNC via its own MPI.

   Section 5.2 describes how the MDSC can coordinate different
   requests to different PNCs, via their own MPIs, to setup different
   services, as defined in section 4.3.

   Section 5.3 describes how the protection scenarios can be deployed,
   including end-to-end protection and segment protection, for both
   intra-domain and inter-domain scenarios.

5.1. YANG Models for Topology Abstraction

   Each PNC reports its respective abstract topology to the MDSC, as
   described in section 4.1.2.

5.1.1. Domain 1 Topology Abstraction

   PNC1 provides the required topology abstraction to expose at its MPI
   toward the MDSC (called "MPI1") one TE Topology instance for the ODU
   layer (called "MPI1 ODU Topology"), containing one TE Node (called
   "ODU Node") for each physical node, as shown in Figure 3 below.

                  ..................................
                  :                                :
                  :   ODU Abstract Topology @ MPI  :
                  :                                :
                  :        +----+        +----+    :
                  :        |    |        |    |    :
                  :        | S1 |--------| S2 |- - - - -(C-R4)
                  :        +----+        +----+    :
                  :         /               |      :
                  :        /                |      :
                  :    +----+   +----+      |      :
                  :    |    |   |    |      |      :
        (C-R1)- - - - -| S3 |---| S4 |      |      :
                  :S3-1+----+   +----+      |      :
                  :       \         \       |      :
                  :        \         \      |      :
                  :       +----+      \     |      :
                  :       |    |       \    |      :
                  :       | S5 |        \   |      :
                  :       +----+         \  |      :
        (C-R2)- - - - -    /    \         \ |      :
                  :S6-1 \ /      \         \|      :
                  :    +----+   +----+   +----+    :
                  :    |    |   |    |   |    |    :
                  :    | S6 |---| S7 |---| S8 |- - - - -(C-R5)
                  :    +----+   +----+   +----+    :
                  :     /                          :
        (C-R3)- - - - -                            :
                  :S6-2                            :
                  :................................:

       Figure 3 Abstract Topology exposed at MPI1 (MPI1 ODU Topology)

   The ODU Nodes in Figure 3 are using the same names as the physical
   nodes to simplify the description of the mapping between the ODU
   Nodes exposed by the Transport PNCs at the MPI and the physical
   nodes in the data plane. This does not correspond to the actual
   usage of the topology model, as described in section 4.3 of
   [TE-TOPO], in which renaming by the client is necessary.

   As described in section 4.1.2, it is assumed that the physical links
   between the physical nodes are pre-configured up to the OTU4 trail
   using mechanisms which are outside the scope of this document. PNC1
   exports at MPI1 one TE Link (called "ODU Link") for each of these
   OTU4 trails.

5.1.2. Domain 2 Grey (Type A) Topology Abstraction

   PNC2 provides the required topology abstraction to expose at its MPI
   towards the MDSC (called "MPI2") only one abstract node (i.e., AN2),
   with only inter-domain and access links.

5.1.3. Domain 3 Grey (Type B) Topology Abstraction

   PNC3 provides the required topology abstraction to expose at its MPI
   towards the MDSC (called "MPI3") only two abstract nodes (i.e., AN31
   and AN32), with internal links, inter-domain links and access links.

5.1.4. Multi-domain Topology Stitching

   As assumed in the beginning of this section, the MDSC does not have
   any knowledge of the topology of each domain until each PNC reports
   its own abstract topology, so the MDSC needs to merge together the
   abstract topologies provided by the different PNCs, at the MPIs, to
   build its own topology view, as described in section 4.3 of
   [TE-TOPO].

   Given the topologies reported by multiple PNCs, the MDSC needs to
   stitch the multi-domain topology and obtain the full topology map.
   The topology of each domain may be abstracted (refer to section 5.2
   of [ACTN-Frame] for the different levels of abstraction), while the
   inter-domain link information must be complete and fully configured
   by the MDSC.

   It is worth noting that the inter-domain link information is
   reported to the MDSC by the two PNCs controlling the two ends of the
   inter-domain link. The MDSC needs to understand how to "stitch"
   together these inter-domain links.

   One possibility is to use the plug-id information, defined in
   [TE-TOPO]: two inter-domain links reporting the same plug-id value
   can be merged as a single intra-domain link within the MDSC native
   topology. The value of the reported plug-id information can be
   either assigned by a central network authority, and configured
   within the two PNC domains, or it can be discovered using automatic
   discovery mechanisms (e.g., LMP-based, as defined in [RFC6898]).

   In case the plug-id values are assigned by a central authority, it
   is under the central authority responsibility to assign unique
   values.

   In case the plug-id values are automatically discovered, the
   information discovered by the automatic discovery mechanisms needs
   to be encoded as a bit string within the plug-id value. This
   encoding is implementation specific but the encoding rules need to
   be consistent across all the PNCs.

   In case of co-existence, within the same network, of multiple
   sources for the plug-id (e.g., central authority and automatic
   discovery, or even different automatic discovery mechanisms), it is
   recommended that the plug-id namespace is partitioned to avoid that
   different sources assign the same plug-id value to different inter-
   domain links. The encoding of the plug-id namespace within the
   plug-id value is implementation specific but needs to be consistent
   across all the PNCs.
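The plug-id based stitching can be sketched as follows (an illustrative sketch; the tuple layout and the LTP names are hypothetical, not an MPI data structure):

```python
# Illustrative sketch: merging the inter-domain link ends reported by
# different PNCs into single MDSC-native links by matching plug-id
# values. Link ends whose plug-id is not reported by exactly two PNCs
# cannot be stitched.

from collections import defaultdict

def stitch_by_plug_id(link_ends):
    """link_ends: list of (pnc, ltp, plug_id) tuples, one per reported
    inter-domain link end. Returns the stitched links and the plug-id
    values that could not be matched."""
    by_plug = defaultdict(list)
    for pnc, ltp, plug_id in link_ends:
        by_plug[plug_id].append((pnc, ltp))
    stitched = [tuple(e) for e in by_plug.values() if len(e) == 2]
    unmatched = [p for p, e in by_plug.items() if len(e) != 2]
    return stitched, unmatched

ends = [
    ("PNC1", "LTP-a", 100),  # one end of an inter-domain link
    ("PNC2", "LTP-b", 100),  # the other end, same plug-id value
    ("PNC2", "LTP-c", 200),  # end whose peer has not been reported
]
stitched, unmatched = stitch_by_plug_id(ends)
print(stitched)   # [(('PNC1', 'LTP-a'), ('PNC2', 'LTP-b'))]
print(unmatched)  # [200]
```

Partitioning the plug-id namespace, as recommended above, is what keeps two different sources from ever producing the same key in this grouping step.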

   Another possibility is to pre-configure, either in the PNCs or in
   the MDSC, the association between the inter-domain link identifiers
   (topology-id, node-id and tp-id) assigned by the two adjacent PNCs
   to the same inter-domain link.

   This last scenario requires further investigation and will be
   discussed in a future version of this document.

5.1.5. Access Links

   Access links in Figure 3 are shown as ODU Links: the modeling of
   access links for other access technologies is currently an open
   issue.

   The modeling of the access link, in case of non-ODU access
   technology, has also an impact on the need to model ODU TTPs and
   layer transition capabilities on the edge nodes (e.g., nodes S2, S3,
   S6 and S8 in Figure 3).

   If, for example, the physical NE S6 is implemented in a "pizza box",
   the data plane would have only one set of ODU termination resources
   (where up to 2xODU4, 4xODU3, 20xODU2, 80xODU1, 160xODU0 and
   160xODUflex can be terminated). The traffic coming from each of the
   10GE access links can be mapped into any of these ODU terminations.

   Instead if, for example, the physical NE S6 is implemented as a
   multi-board system, where access links reside on different/dedicated
   access cards with separated sets of ODU termination resources (where
   up to 1xODU4, 2xODU3, 10xODU2, 40xODU1, 80xODU0 and 80xODUflex for
   each set can be terminated), the traffic coming from one 10GE access
   link can be mapped only into the ODU terminations which reside on
   the same access card.

   The more generic implementation option for a physical NE (e.g., S6)
   is a multi-board system with multiple access cards, with separated
   sets of access links and ODU termination resources (where up to
   1xODU4, 2xODU3, 10xODU2, 40xODU1, 80xODU0 and 80xODUflex for each
   set can be terminated). The traffic coming from each of the 10GE
   access links on one access card can be mapped only into any of the
   ODU terminations which reside on the same access card.

   In the last two cases, only the ODUs terminated on the same access
   card where the access link resides can carry the traffic coming from
   that 10GE access link.

   In all these cases, terminated ODUs can instead be sent to any of
   the OTU4 interfaces, assuming the implementation is based on a
   non-blocking ODU cross-connect.

   If the CNC access links are reported via MPI in some, still to be
   defined, client topology, it is capable possible to
   request service connectivity from report each set of ODU
   termination resources as an ODU TTP within the MDSC ODU Topology of
   Figure 1. and to support IP routers
   connectivity.

   The same service scenarios, as described in section 4.3, are also
   application use either the inter-layer lock-id or the
   transitional link, as described in sections 3.4 and 3.10 of
   [TE-TOPO], to this use cases correlate the access links, in the client
   topology, with the only difference that ODU TTPs, in the two
   IP routers to be interconnected are attached ODU topology, to transport nodes which belong to different PNCs domains and access
   link are under connected to.

5.2. YANG Models for Service Configuration

   The service configuration procedure is assumed to be initiated (step
   1 in Figure 4) at the CMI, from the CNC to the MDSC. Analysis of the
   CMI models (e.g., L1SM, L2SM, Transport-Service, VN, et al.) is
   outside the scope of this document.

   It is assumed that the CMI YANG models provide all the information
   that allows the MDSC to understand that it needs to coordinate the
   setup of a multi-domain ODU connection (or connection segment) and,
   when needed, also the configuration of the different adaptation
   functions in the edge nodes belonging to the different domains.

                                 |
                                 | {1}
                                 V
                          ----------------
                         |           {2}  |
                          | {3}  MDSC      |
                         |                |
                          ----------------
                           ^     ^      ^
                    {3.1}  |     |      |
                 +---------+     |{3.2} |
                 |               |      +----------+
                 |               V                 |
                 |           ----------            |{3.3}
                 |          |   PNC2   |           |
                 |           ----------            |
                 |               ^                 |
                 V               | {4.2}           |
             ----------          V                 |
            |   PNC1   |       -----               V
             ----------      (Network)        ----------
                 ^          ( Domain 2)      |   PNC3   |
                 | {4.1}   (          _)      ----------
                 V          (        )            ^
               -----       C==========D           | {4.3}
             (Network)    /  (       ) \          V
            ( Domain 1)  /     -----    \       -----
           (           )/                \    (Network)
           A===========B                  \  ( Domain 3)
          / (         )                    \(           )
      AP-1   (       )                      X===========Z
               -----                         (         ) \
                                              (       )   AP-2
                                                -----

                    Figure 4 Multi-domain Service Setup

   As an example, the objective in this section is to configure a
   transport service between C-R1 and C-R5. The cross-domain routing is
   assumed to be C-R1 <-> S3 <-> S2 <-> S31 <-> S33 <-> S34 <-> S15 <->
   S18 <-> C-R5.

   According to the different client signal types, different adaptation
   functions are required.

   After receiving such a request, the MDSC determines the domain
   sequence, i.e., domain 1 <-> domain 2 <-> domain 3, with the
   corresponding PNCs and inter-domain links (step 2 in Figure 4).

   As described in [PATH-COMPUTE], the domain sequence can be
   determined by running the MDSC's own path computation on the MDSC
   internal topology, defined in section 5.1.4, if and only if the MDSC
   has enough topology information. Otherwise the MDSC can send path
   computation requests to the different PNCs (steps 2.1, 2.2 and 2.3
   in Figure 4) and use this information to determine the optimal path
   on its internal topology and therefore the domain sequence.
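The domain sequence determination can be sketched as follows (a minimal illustration; the adjacencies and the node-to-domain tags are assumptions drawn from the example route, not data retrieved via the MPIs):

```python
# Illustrative sketch: a hop-count shortest-path computation on the
# MDSC's stitched internal topology, followed by collapsing the path
# into a domain sequence. Topology and domain tags are assumed.

from collections import deque

LINKS = {  # assumed stitched adjacencies along the example route
    "S3": ["S1"], "S1": ["S3", "S2"], "S2": ["S1", "S31"],
    "S31": ["S2", "S33"], "S33": ["S31", "S34"],
    "S34": ["S33", "S15"], "S15": ["S34", "S18"], "S18": ["S15"],
}
DOMAIN = {  # assumed node-to-domain tags
    "S3": "domain 1", "S1": "domain 1", "S2": "domain 1",
    "S31": "domain 2", "S33": "domain 2", "S34": "domain 2",
    "S15": "domain 3", "S18": "domain 3",
}

def shortest_path(src, dst):
    """Breadth-first search; returns one hop-count-shortest path."""
    prev, seen, queue = {}, {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for nxt in LINKS[node]:
            if nxt not in seen:
                seen.add(nxt)
                prev[nxt] = node
                queue.append(nxt)
    return None

path = shortest_path("S3", "S18")
sequence = []
for node in path:
    if not sequence or sequence[-1] != DOMAIN[node]:
        sequence.append(DOMAIN[node])
print(sequence)  # ['domain 1', 'domain 2', 'domain 3']
```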

   The MDSC will then decompose the tunnel request into a few tunnel
   segments via the tunnel models (including both the TE Tunnel model
   and the OTN Tunnel model), and request the different PNCs to setup
   each intra-domain tunnel segment (steps 3, 3.1, 3.2 and 3.3 in
   Figure 4).

   Assume that each intra-domain tunnel segment can be set up
   successfully and that each PNC responds to the MDSC respectively.
   Based on each segment, the MDSC will take care of the configuration
   of both the intra-domain tunnel segments and the inter-domain tunnel
   via the corresponding MPIs (via the TE Tunnel model and the OTN
   Tunnel model). More specifically, for the inter-domain
   configuration, the ts-bitmap and tpn attributes need to be
   configured using the OTN Tunnel model [xxx]. Then the end-to-end
   OTN tunnel will be ready.
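The decomposition into intra-domain tunnel segments can be sketched as follows (the node-to-PNC mapping is an assumption based on the example route, not information carried in the tunnel models):

```python
# Illustrative sketch: splitting the end-to-end cross-domain route
# into the per-PNC intra-domain tunnel segments that the MDSC would
# request via each MPI. The controlling PNC of each node is assumed.

PNC = {
    "S3": "PNC1", "S1": "PNC1", "S2": "PNC1",
    "S31": "PNC2", "S33": "PNC2", "S34": "PNC2",
    "S15": "PNC3", "S18": "PNC3",
}

def decompose(route):
    """Group consecutive hops controlled by the same PNC."""
    segments = []
    for node in route:
        pnc = PNC[node]
        if segments and segments[-1][0] == pnc:
            segments[-1][1].append(node)
        else:
            segments.append((pnc, [node]))
    return segments

route = ["S3", "S1", "S2", "S31", "S33", "S34", "S15", "S18"]
print(decompose(route))
# [('PNC1', ['S3', 'S1', 'S2']), ('PNC2', ['S31', 'S33', 'S34']),
#  ('PNC3', ['S15', 'S18'])]
```

The boundary between two consecutive segments corresponds to an inter-domain link, which is where the ts-bitmap and tpn attributes mentioned above need to be configured consistently on both sides.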

   In any case, the access link configuration is done only on the PNCs
   that control the access links (e.g., PNC-1 and PNC-3 in our example)
   and not on the PNCs of the transit domains (e.g., PNC-2 in our
   example). The access links will be configured by the MDSC after the
   OTN tunnel is set up. The access configuration is dependent on the
   type of service. More details can be found in the following
   sections.

5.2.1. ODU Transit Service

   In this scenario, the access links are configured as ODU Links.

   As described in section 4.3.1, the CNC needs to setup an ODU2 end-
   to-end connection, supporting an IP link, between C-R1 and C-R3 and
   requests via the CMI to the MDSC the setup of an ODU transit
   service.

   From the topology information described in section 5.1 above, the
   MDSC understands that C-R1 is attached to the access link
   terminating on the S3-1 LTP and that C-R3 is attached to the access
   link terminating on the S6-2 LTP of the ODU Topology exposed by
   PNC1.

   Based on the assumption 0) in section 1.2, the MDSC would then
   request PNC1 to setup an ODU2 (Transit Segment) Tunnel between the
   S3-1 and S6-2 LTPs:

   o Source and Destination TTPs are not specified (since it is a
      Transit Tunnel)

   o Ingress and egress points are indicated in the explicit-route-
      objects of the primary path:

        o The first element of the explicit-route-objects references
          the access link terminating on S3-1 LTP

        o Last element of the explicit-route-objects references the
          access link terminating on S6-2 LTP

   The configuration of the timeslots used by the ODU2 connection
   within the transport network domain (i.e., on the internal links) is
   a matter of the Transport PNC and its interactions with the physical
   network elements and therefore is outside the scope of this
   document.

   However, the configuration of the timeslots used by the ODU2
   connection at the edge of the transport network domain (i.e., on the
   access links) needs to take into account not only the timeslots
   available on the physical nodes at the edge of the transport network
   domain (e.g., S3 and S6) but also on the devices, outside of the
   transport network domain, connected through these access links
   (e.g., C-R1 and C-R3).

   Based on the same as described assumption 2) in section 4.3.4.

   The traffic flow between C-R1 and C-R5 can be summarized as:

      C-R1 ([PKT] -> VLAN), S3 (VLAN -> [ODU2]), S1 ([ODU2]),
      S2 ([ODU2]), S31 ([ODU2]), S33 ([ODU2]), S34 ([ODU2]),
      S15 ([ODU2]), S18 ([ODU2] -> VLAN), C-R5 (VLAN -> [PKT])

6.3.5. EVPLAN and EVPTree Services

   In order 1.2, the MDSC, when requesting
   the Transport PNC to setup an IP subnet between C-R1, C-R2, C-R3 and C-R7, an
   EVPLAN/EVPTree service needs the (Transit Segment) ODU2 Tunnel, it
   would also configure the timeslots to be created, supported by two ODUflex
   end-to-end connections respectively between S3 and S6, crossing
   transport node S5, and between used on the access links.
   The MDSC can know the timeslots which are available on the edge OTN
   Node (e.g., S3 and S18, crossing transport nodes
   S1, S2, S31, S33, S34 and S15 which belong to different S6) from the OTN Topology information exposed by
   the Transport PNC domains.

   The VLAN configuration at the MPI as well as the timeslots which are
   available on the access links is devices outside of the transport network domain
   (e.g., C-R1 and C-R3), by means which are outside the same as described
   in section 4.3.5.

   The configuration scope of this
   document.
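The timeslot coordination on the access links can be sketched as follows (the slot layout and the availability values are assumptions for illustration, not the OTN Tunnel model encoding):

```python
# Illustrative sketch: picking the timeslots for an ODU2 on an access
# link by intersecting the availability known for the edge OTN node
# with the availability of the customer device, then taking as many
# tributary slots as the ODU2 needs.

ODU2_SLOTS = 8  # an ODU2 occupies 8 x 1.25G tributary slots

def choose_timeslots(edge_free, customer_free, needed=ODU2_SLOTS):
    """edge_free/customer_free: sets of free tributary slot numbers.
    Returns a sorted list of slots usable on both sides, or None if
    the request cannot be satisfied."""
    common = sorted(edge_free & customer_free)
    return common[:needed] if len(common) >= needed else None

edge = set(range(1, 21))      # slots 1..20 free on S3 (assumed)
customer = set(range(5, 17))  # slots 5..16 free on C-R1 (assumed)
print(choose_timeslots(edge, customer))
# [5, 6, 7, 8, 9, 10, 11, 12]
```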

   The Transport PNC performs path computation and sets up the ODU2
   cross-connections within the physical nodes S3, S5 and S6.

   The Transport PNC reports the status of the created ODU2 (Transit
   Segment) Tunnel and its path within the ODU Topology as shown in
   Figure 5 below:

                   ..................................
                   :                                :
                   :   ODU Abstract Topology @ MPI  :
                   :                                :
                   :        +----+        +----+    :
                   :        |    |        |    |    :
                    :        | S1 |--------| S2 |- - - - -(C-R4)
                    :        +----+        +----+    :
                   :         /               |      :
                   :        /                |      :
                   :    +----+   +----+      |      :
                   :    |    |   |    |      |      :
          (C-R1)- - - - -| S3 |---| S4 |      |      :
                   :S3-1 <<== +   +----+     |      :
                   :       =        \        |      :
                   :       = \       \       |      :
                   :       == ---+    \      |      :
                   :        =    |     \     |      :
                   :        = S5 |      \    |      :
                   :        == --+       \   |      :
         (C-R2)- - - - -     =  \         \  |      :
                   :S6-1 \ / =   \         \ |      :
                   :    +--- =   +----+   +----+    :
                   :    |    =   |    |   |    |    :
                   :    | S6 = --| S7 |---| S8 |- - - - -(C-R5)
                   :    +--- =   +----+   +----+    :
                   :     /   =                      :
         (C-R3)- - - - -  <<==                      :
                   :S6-2                            :
                   :................................:

                        Figure 5 ODU2 Transit Tunnel

5.2.2. EPL over ODU Service

   In this scenario, the access links are configured as Ethernet Links.

   As described in section 4.3.2, the CNC needs to setup an EPL
   service, supporting an IP link, between C-R1 and C-R3 and requests
   this service at the CMI to the MDSC.

   The MDSC needs to setup an EPL service between C-R1 and C-R3,
   supported by an ODU2 end-to-end connection between S3 and S6.

   As described in section 5.1.5 above, it is not clear in this case
   how the Ethernet access links, between the transport network and the
   IP routers, are reported by the PNC to the MDSC.

   If the 10GE physical links are not reported as ODU Links within the
   ODU topology information, described in section 5.1.1 above, then the
   MDSC will not have sufficient information to know that C-R1 and C-R3
   are attached to nodes S3 and S6.

   Assuming that the MDSC knows how C-R1 and C-R3 are attached to the
   transport network, the MDSC would request the Transport PNC to setup
   an ODU2 end-to-end Tunnel between S3 and S6.

   This ODU Tunnel is setup between two TTPs of nodes S3 and S6. In
   case nodes S3 and S6 support more than one TTP, the MDSC should
   decide which TTP to use.

   As discussed in section 5.1.5, depending on the different hardware
   implementations of the physical nodes S3 and S6, not all the access
   links can be connected to all the TTPs. The MDSC should therefore
   not only select the optimal TTP but also a TTP that would allow the
   Tunnel to be used by the service.

   It is assumed that in case node S3 or node S6 supports only one TTP,
   this TTP can be accessed by all the access links.

   Once the ODU2 Tunnel setup has been requested, unless there is a
   one-to-one relationship between the S3 and S6 TTPs and the Ethernet
   access links toward C-R1 and C-R3 (as in the case, described in
   section 6.3 5.1.5, where the Ethernet access links reside on
   different/dedicated access card such that the ODU2 tunnel can only
   carry the Ethernet traffic from failures
   within the OTN multi-domain transport network, only Ethernet access link on the
   same access card where the ODU2 tunnel is terminated), the MDSC should be
   capable also
   needs to request each PNC to configure OTN intra-domain protection
   when requesting the setup of an EPL service from the access links
   on S3 and S6, attached to C-R1 and C-R3, and this ODU2 data plane connection segment.

   If linear protection is used within a domain, Tunnel.
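
   The TTP selection constraint described above can be illustrated
   with a small sketch (the data structures, TTP names and link names
   are hypothetical and do not come from any YANG model or MDSC
   implementation): given, for each node, the access links that each
   TTP can reach, the MDSC must terminate the ODU2 Tunnel on a TTP
   reachable from the access link used by the EPL service.

   ```python
   # Illustrative sketch only: TTP names and access-link reachability
   # below are invented for the example.
   def select_ttps(ttp_reachability, access_link):
       """Return the TTPs on a node that the given access link can reach."""
       return [ttp for ttp, links in ttp_reachability.items()
               if access_link in links]

   # Node S3: two TTPs, each reachable only from links on the same card.
   s3_ttps = {
       "ttp-1": {"access-link-to-C-R1"},   # card 1
       "ttp-2": {"access-link-to-C-R2"},   # card 2
   }

   # Only "ttp-1" can serve an EPL service using the C-R1 access link,
   # so the MDSC must terminate the ODU2 Tunnel on that TTP.
   candidates = select_ttps(s3_ttps, "access-link-to-C-R1")
   assert candidates == ["ttp-1"]
   ```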

5.2.3. Other OTN Client Services

   In this scenario, the access links are configured as one of the OTN
   clients (e.g., STM-64) links.

   As described in section 4.3.3, the CNC needs to setup an STM-64
   Private Link service, supporting an IP link, between C-R1 and C-R3
   and requests this service at the CMI to the MDSC.

   MDSC needs to setup an STM-64 Private Link service between C-R1 and
   C-R3 supported by an ODU2 end-to-end connection between S3 and S6.

   As described in section 5.1.5 above, it is not clear in this case
   how the access links (e.g., the STM-N access links) between the
   transport network and the IP router are reported by the PNC to the
   MDSC.

   The same issues, as described in section 5.2.2, apply here:

   o the MDSC needs to understand that C-R1 and C-R3 are connected,
      through STM-64 access links, with S3 and S6

   o the MDSC needs to understand which TTPs in S3 and S6 can be
      accessed by these access links

   o the MDSC needs to configure the private line service from these
      access links through the ODU2 tunnel

5.2.4. EVPL over ODU Service

   In this scenario, the access links are configured as Ethernet
   links, as described in section 5.2.2 above.

   As described in section 4.3.4, the CNC needs to setup EVPL
   services, supporting IP links, between C-R1 and C-R3, as well as
   between C-R1 and C-R4, and requests these services at the CMI to
   the MDSC.

   MDSC needs to setup two EVPL services, between C-R1 and C-R3, as
   well as between C-R1 and C-R4, supported by ODU0 end-to-end data
   plane connections between S3 and S6 and between S3 and S2
   respectively.

   As described in section 5.1.5 above, it is not clear in this case
   how the Ethernet access links between the transport network and the
   IP router are reported by the PNC to the MDSC.

   The same issues, as described in section 5.1.5 above, apply here:

   o the MDSC needs to understand that C-R1, C-R3 and C-R4 are
      connected, through Ethernet access links, with S3, S6 and S2

   o the MDSC needs to understand which TTPs in S3, S6 and S2 can be
      accessed by these access links

   o the MDSC needs to configure the EVPL services from these access
      links through the ODU0 tunnels

   In addition, the MDSC needs to get the information that the access
   links on S3, S6 and S2 are capable of supporting EVPL (rather than
   just EPL) services, as well as to coordinate the VLAN
   configuration, for each EVPL service, on these access links (this
   is a similar issue as the timeslot configuration on access links
   discussed in section 4.3.1 above).
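
   The VLAN coordination issue just mentioned can be sketched as
   follows (a hypothetical illustration only: the function name and
   the first-fit fallback policy are invented for the example and are
   not taken from any YANG model or controller implementation). Per
   access link, the MDSC must ensure that no two EVPL services use the
   same VLAN ID:

   ```python
   # Hypothetical sketch: per access link, assign each EVPL service a
   # VLAN ID that is not already in use on that link.
   def assign_vlan(used_vlans, requested=None, vlan_range=range(1, 4095)):
       """Pick a VLAN ID not yet used on this access link."""
       if requested is not None and requested not in used_vlans:
           return requested
       for vid in vlan_range:
           if vid not in used_vlans:
               return vid
       raise ValueError("no VLAN ID available on this access link")

   # Two EVPL services sharing the S3 access link toward C-R1:
   used_on_s3_link = set()
   vid1 = assign_vlan(used_on_s3_link, requested=100)
   used_on_s3_link.add(vid1)
   vid2 = assign_vlan(used_on_s3_link, requested=100)  # collision: falls back
   used_on_s3_link.add(vid2)
   assert (vid1, vid2) == (100, 1)
   ```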

5.3. YANG Models for Protection Configuration

5.3.1. Linear Protection (end-to-end)

   To be discussed in future versions of this document.

5.3.2. Segmented Protection

   To be discussed in future versions of this document.

6. Detailed JSON Examples

6.1. JSON Examples for Topology Abstractions

6.1.1. Domain 1 White Topology Abstraction

   Section 5.1.1 describes how PNC1 can provide a white topology
   abstraction to the MDSC via the MPI. Figure 3 is an example of such
   an ODU Topology.

   This section provides the detailed JSON code describing how this
   ODU Topology is reported by the PNC, using the [TE-TOPO] and
   [OTN-TOPO] YANG models at the MPI.

   JSON code "mpi1-otn-topology.json" has been provided in the
   appendix of this document.
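
   For orientation, the following is a much-simplified, hand-written
   fragment sketching the general shape of such an encoding under the
   [I2RS-TOPO], [TE-TOPO] and [OTN-TOPO] models (all identifier values
   are invented; the JSON file referenced in Appendix A is the
   authoritative example):

   ```json
   {
     "ietf-network:networks": {
       "network": [
         {
           "network-id": "otn-domain1-abstract-topology",
           "network-types": {
             "ietf-te-topology:te-topology": {
               "ietf-otn-topology:otn-topology": {}
             }
           },
           "node": [
             {
               "node-id": "S1",
               "ietf-te-topology:te": {
                 "te-node-id": "10.0.0.1"
               }
             }
           ]
         }
       ]
     }
   }
   ```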

6.2. JSON Examples for Service Configuration

6.2.1. ODU Transit Service

   Section 5.2.1 describes how the MDSC can request PNC1, via the MPI,
   to setup an ODU2 transit service over an ODU Topology described in
   section 5.1.1.

   This section provides the detailed JSON code describing how the
   setup of this ODU2 transit service can be requested by the MDSC,
   using the [TE-TUNNEL] and [OTN-TUNNEL] YANG models at the MPI.

   JSON code "mpi1-odu2-service-config.json" has been provided in
   the appendix of this document.
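
   As a rough orientation, such a request could take the following
   general shape under the [TE-TUNNEL] model (a much-simplified,
   hypothetical fragment: the tunnel name and node addresses are
   invented, and the JSON file referenced in Appendix A is the
   authoritative example):

   ```json
   {
     "ietf-te:te": {
       "tunnels": {
         "tunnel": [
           {
             "name": "odu2-transit-service-example",
             "source": "10.0.0.3",
             "destination": "10.0.0.6"
           }
         ]
       }
     }
   }
   ```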

6.3. JSON Example for Protection Configuration

   To be added

7. Security Considerations

   This section is for further study.

8. IANA Considerations

   This document requires no IANA actions.

9. References

9.1. Normative References

   [RFC7926] Farrel, A. et al., "Problem Statement and Architecture for
             Information Exchange between Interconnected Traffic-
             Engineered Networks", BCP 206, RFC 7926, July 2016.

   [RFC4427] Mannie, E., Papadimitriou, D., "Recovery (Protection and
             Restoration) Terminology for Generalized Multi-Protocol
             Label Switching (GMPLS)", RFC 4427, March 2006.

   [ACTN-Frame] Ceccarelli, D., Lee, Y. et al., "Framework for
             Abstraction and Control of Transport Networks", draft-
             ietf-teas-actn-framework, work in progress.

   [ITU-T G.709] ITU-T Recommendation G.709 (06/16), "Interfaces for
             the optical transport network", June 2016.

   [ITU-T G.808.1] ITU-T Recommendation G.808.1 (05/14), "Generic
             protection switching - Linear trail and subnetwork
             protection", May 2014.

   [ITU-T G.873.1] ITU-T Recommendation G.873.1 (05/14), "Optical
             transport network (OTN): Linear protection", May 2014.

   [TE-TOPO] Liu, X. et al., "YANG Data Model for TE Topologies",
             draft-ietf-teas-yang-te-topo, work in progress.

   [OTN-TOPO] Zheng, H. et al., "A YANG Data Model for Optical
             Transport Network Topology", draft-ietf-ccamp-otn-topo-
             yang, work in progress.

   [CLIENT-TOPO] Zheng, H. et al., "A YANG Data Model for Client-layer
             Topology", draft-zheng-ccamp-client-topo-yang, work in
             progress.

   [TE-TUNNEL] Saad, T. et al., "A YANG Data Model for Traffic
             Engineering Tunnels and Interfaces", draft-ietf-teas-yang-
             te, work in progress.

   [PATH-COMPUTE] Busi, I., Belotti, S. et al, "Yang model for
             requesting Path Computation", draft-busibel-teas-yang-
             path-computation, work in progress.

   [OTN-TUNNEL]  Zheng, H. et al., "OTN Tunnel YANG Model", draft-
             ietf-ccamp-otn-tunnel-model, work in progress.

   [CLIENT-SVC]  Zheng, H. et al., "A YANG Data Model for Optical
             Transport Network Client Signals", draft-zheng-ccamp-otn-
             client-signal-yang, work in progress.

9.2. Informative References

   [RFC5151] Farrel, A. et al., "Inter-Domain MPLS and GMPLS Traffic
             Engineering --Resource Reservation Protocol-Traffic
             Engineering (RSVP-TE) Extensions", RFC 5151, February
             2008.

   [RFC6898] Li, D. et al., "Link Management Protocol Behavior
             Negotiation and Configuration Modifications", RFC 6898,
             March 2013.

   [RFC8309] Wu, Q. et al., "Service Models Explained", RFC 8309,
             January 2018.

   [ACTN-YANG] Zhang, X. et al., "Applicability of YANG models for
             Abstraction and Control of Traffic Engineered Networks",
             draft-zhang-teas-actn-yang, work in progress.

   [I2RS-TOPO] Clemm, A. et al., "A Data Model for Network
             Topologies", draft-ietf-i2rs-yang-network-topo, work in
             progress.

   [ONF TR-527] ONF Technical Recommendation TR-527, "Functional
             Requirements for Transport API", June 2016.

   [ONF GitHub] ONF Open Transport (SNOWMASS)
             https://github.com/OpenNetworkingFoundation/Snowmass-
             ONFOpenTransport

10. Acknowledgments

   The authors would like to thank all members of the Transport NBI
   Design Team involved in the definition of use cases, gap analysis
   and guidelines for using the IETF YANG models at the Northbound
   Interface (NBI) of a Transport SDN Controller.

   The authors would like to thank Xian Zhang, Anurag Sharma, Sergio
   Belotti, Tara Cummings, Michael Scharf, Karthik Sethuraman, Oscar
   Gonzalez de Dios, Hans Bjursrom and Italo Busi for having initiated
   the work on gap analysis for transport NBI and having provided
   foundations work for the development of this document.

   The authors would like to thank the authors of the TE Topology and
   Tunnel YANG models [TE-TOPO] and [TE-TUNNEL], in particular Igor
   Bryskin, Vishnu Pavan Beeram, Tarek Saad and Xufeng Liu, for their
   support in addressing any gap identified during the analysis work.

   This document was prepared using 2-Word-v2.0.template.dot.

Appendix A.                 Detailed JSON Examples

A.1. JSON Code: mpi1-otn-topology.json

   The JSON code for this use case is currently located on GitHub at:

   https://github.com/danielkinguk/transport-nbi/blob/master/Internet-
   Drafts/Applicability-Statement/01/mpi1-otn-topology.json

A.2. JSON Code:  mpi1-odu2-service-config.json

   The JSON code for this use case is currently located on GitHub at:

   https://github.com/danielkinguk/transport-nbi/blob/master/Internet-
   Drafts/Applicability-Statement/01/mpi1-odu2-service-config.json

Appendix B. Validating a JSON fragment against a YANG Model

   The objective is to have a tool that allows validating whether a
   piece of JSON code is compliant with a YANG model without using a
   client/server.

B.1. DSDL-based approach

   The idea is to generate a JSON driver file (JTOX) from YANG, then
   use it to translate JSON to XML and validate it against the DSDL
   schemas, as shown in Figure 6.

   Useful link: https://github.com/mbj4668/pyang/wiki/XmlJson

                           (2)
               YANG-module ---> DSDL-schemas (RNG,SCH,DSRL)
                      |                  |
                      | (1)              |
                      |                  |
      Config/state  JTOX-file            | (4)
             \        |                  |
              \       |                  |
               \      V                  V
      JSON-file------------> XML-file ----------------> Output
                 (3)

           Figure 6 - DSDL-based approach for JSON code validation

   In order to allow the use of comments following the convention
   defined in Section 2, without impacting the validation process,
   these comments are automatically removed from the JSON file before
   it is validated.
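
   As an illustration, this comment-removal step could be implemented
   along the following lines (a minimal sketch, assuming full-line
   "//" comments; the actual comment convention is the one defined in
   Section 2, and the function name here is invented):

   ```python
   import json

   def strip_comments(json_text):
       """Remove full-line '//' comments (an assumed convention) so that
       the remaining text is plain JSON suitable for validation."""
       kept = [line for line in json_text.splitlines()
               if not line.lstrip().startswith("//")]
       return "\n".join(kept)

   example = """
   // ODU topology example (comment stripped before validation)
   {
     "network-id": "otn-domain1"
   }
   """

   data = json.loads(strip_comments(example))
   assert data["network-id"] == "otn-domain1"
   ```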

B.2. Why an XSD-based approach is not used

   This approach has been analyzed and discarded because it is no
   longer supported by pyang.

   The idea is to convert YANG to XSD, JSON to XML and validate it
   against the XSD, as shown in Figure 7:

                     (1)
         YANG-module ---> XSD-schema - \       (3)
                                        +--> Validation
         JSON-file------> XML-file ----/
                     (2)

            Figure 7 - XSD-based approach for JSON code validation

   The pyang support for the XSD output format was deprecated in 1.5
   and removed in 1.7.1. However, pyang 1.7.1 is necessary to work
   with YANG 1.1, so the process shown in Figure 7 stops at step (1).

   Authors' Addresses

   Italo Busi (Editor)
   Huawei

   Email: italo.busi@huawei.com

   Daniel King (Editor)
   Lancaster University

   Email: d.king@lancaster.ac.uk

   Haomian Zheng (Editor)
   Huawei

   Email: zhenghaomian@huawei.com

   Yunbin Xu (Editor)
   CAICT

   Email: xuyunbin@ritt.cn

   Yang Zhao
   China Mobile

   Email: zhaoyangyjy@chinamobile.com

   Sergio Belotti
   Nokia

   Email: sergio.belotti@nokia.com

   Gianmarco Bruno
   Ericsson

   Email: gianmarco.bruno@ericsson.com

   Young Lee
   Huawei

   Email: leeyoung@huawei.com

   Victor Lopez
   Telefonica

   Email: victor.lopezalvarez@telefonica.com

   Carlo Perocchio
   Ericsson

   Email: carlo.perocchio@ericsson.com

   Ricard Vilalta
   CTTC

   Email: ricard.vilalta@cttc.es