NFSv4 Working Group                                      David L. Black
Internet Draft                                         Stephen Fridella
Expires: September 2007                                    Jason Glasgow
Intended Status: Proposed Standard                      EMC Corporation
                                                           March 4, 2007

                         pNFS Block/Volume Layout
                    draft-ietf-nfsv4-pnfs-block-03.txt

Status of this Memo

   By submitting this Internet-Draft, each author represents that
   any applicable patent or other IPR claims of which he or she is
   aware have been or will be disclosed, and any of which he or she
   becomes aware will be disclosed, in accordance with Section 6 of
   BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
        http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
        http://www.ietf.org/shadow.html

   This Internet-Draft will expire in September 2007.

Abstract

   Parallel NFS (pNFS) extends NFSv4 to allow clients to directly access
   file data on the storage used by the NFSv4 server.  This ability to
   bypass the server for data access can increase both performance and
   parallelism, but requires additional client functionality for data
   access, some of which is dependent on the class of storage used.  The
   main pNFS operations draft specifies storage-class-independent
   extensions to NFS; this draft specifies the additional extensions
   (primarily data structures) for use of pNFS with block and volume
   based storage.

Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC-2119 [RFC2119].

Table of Contents

   1. Introduction...................................................3
   2. Block Layout Description.......................................3
      2.1. Background and Architecture...............................3
      2.2. GETDEVICELIST and GETDEVICEINFO...........................4
         2.2.1. Volume Identification................................4
         2.2.2. Volume Topology......................................5
         2.2.3. GETDEVICELIST and GETDEVICEINFO deviceid4............8
      2.3. Data Structures: Extents and Extent Lists.................8
         2.3.1. Layout Requests and Extent Lists....................10
         2.3.2. Layout Commits......................................11
         2.3.3. Layout Returns......................................12
         2.3.4. Client Copy-on-Write Processing.....................13
         2.3.5. Extents are Permissions.............................14
         2.3.6. End-of-file Processing..............................15
      2.4. Crash Recovery Issues....................................16
   3. Security Considerations.......................................16
   4. Conclusions...................................................18
   5. IANA Considerations...........................................18
   6. Revision History..............................................18
   7. Acknowledgments...............................................19
   8. References....................................................19
      8.1. Normative References.....................................19
      8.2. Informative References...................................20
   Author's Addresses...............................................20
   Intellectual Property Statement..................................20
   Disclaimer of Validity...........................................21
   Copyright Statement..............................................21
   Acknowledgment...................................................21

1. Introduction

   Figure 1 shows the overall architecture of a pNFS system:

       +-----------+
       |+-----------+                                 +-----------+
       ||+-----------+                                |           |
       |||           |        NFSv4 + pNFS            |           |
       +||  Clients  |<------------------------------>|   Server  |
        +|           |                                |           |
         +-----------+                                |           |
              |||                                     +-----------+
              |||                                           |
              |||                                           |
              |||                +-----------+              |
              |||                |+-----------+             |
              ||+----------------||+-----------+            |
              |+-----------------|||           |            |
              +------------------+||  Storage  |------------+
                                  +|  Systems  |
                                   +-----------+

                        Figure 1 pNFS Architecture

   The overall approach is that pNFS-enhanced clients obtain sufficient
   information from the server to enable them to access the underlying
   storage (on the Storage Systems) directly.  See the pNFS portion of
   [NFSV4.1] for more details.  This draft is concerned with access from
   pNFS clients to Storage Systems over storage protocols based on
   blocks and volumes, such as the SCSI protocol family (e.g., parallel
   SCSI, FCP for Fibre Channel, iSCSI, SAS).  This class of storage is
   referred to as block/volume storage.  While the Server to Storage
   System protocol is not of concern for interoperability here, it will
   typically also be a block/volume protocol when clients use block/
   volume protocols.

2. Block Layout Description

2.1. Background and Architecture

   The fundamental storage abstraction supported by block/volume storage
   is a storage volume consisting of a sequential series of fixed size
   blocks.  This can be thought of as a logical disk; it may be realized
   by the Storage System as a physical disk, a portion of a physical
   disk or something more complex (e.g., concatenation, striping, RAID,
   and combinations thereof) involving multiple physical disks or
   portions thereof.

   A pNFS layout for this block/volume class of storage is responsible
   for mapping from an NFS file (or portion of a file) to the blocks of
   storage volumes that contain the file.  The blocks are expressed as
   extents with 64 bit offsets and lengths using the existing NFSv4
   offset4 and length4 types.  Clients must be able to perform I/O to
   the block extents without affecting additional areas of storage
   (especially important for writes); therefore, extents MUST be aligned
   to 512-byte boundaries, and SHOULD be aligned to the block size used
   by the NFSv4 server in managing the actual filesystem (4 kilobytes
   and 8 kilobytes are common block sizes).  This block size is
   available as the NFSv4.1 layout_blocksize attribute [NFSV4.1].
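
   To make these alignment rules concrete, here is a minimal sketch (in
   C; illustrative only, not part of the protocol) of the checks a
   client might apply before issuing direct I/O:

   /* Sketch only: alignment rules of this section.  The 512-byte
    * check is a MUST; alignment to the server's filesystem block size
    * (the layout_blocksize attribute) is a SHOULD. */
   #include <stdbool.h>
   #include <stdint.h>

   static bool is_sector_aligned(uint64_t offset, uint64_t length)
   {
       /* MUST: extents are aligned to 512-byte boundaries */
       return (offset % 512) == 0 && (length % 512) == 0;
   }

   static bool is_blocksize_aligned(uint64_t offset, uint64_t length,
                                    uint64_t layout_blocksize)
   {
       /* SHOULD: extents are aligned to the server's block size */
       return layout_blocksize != 0 &&
              (offset % layout_blocksize) == 0 &&
              (length % layout_blocksize) == 0;
   }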

   The pNFS operation for requesting a layout (LAYOUTGET) includes the
   "pnfs_layoutiomode4 iomode" argument which indicates whether the
   requested layout is for read-only use or read-write use.  A read-only
   layout may contain holes that are read as zero, whereas a read-write
   layout will contain allocated, but uninitialized, storage in those
   holes (read as zero, can be written by the client).  This draft also
   supports client participation in copy-on-write by providing both
   read-only and uninitialized storage for the same range in a layout.
   Reads are initially performed on the read-only storage, with writes
   going to the uninitialized storage.  After the first write that
   initializes the uninitialized storage, all reads are performed to
   that now-initialized writeable storage, and the corresponding read-
   only storage is no longer used.

2.2. GETDEVICELIST and GETDEVICEINFO

2.2.1. Volume Identification

   Storage Systems such as storage arrays can have multiple physical
   network ports that need not be connected to a common network,
   resulting in a pNFS client having simultaneous multipath access to
   the same storage volumes via different ports on different networks.
   The networks may not even be the same technology - for example,
   access to the same volume via both iSCSI and Fibre Channel is
   possible, hence network addresses are difficult to use for volume
   identification.  For this reason, this pNFS block layout identifies
   storage volumes by content, for example providing the means to match
   (unique portions of) labels used by volume managers.  Any block pNFS
   system using this layout MUST support a means of content-based
   unique volume identification that can be employed via the data
   structure given here.

   struct pnfs_block_sig_component4 {  /* disk signature component */

      int64_t  sig_offset;    /* byte offset of component
                                 from start of volume if positive
                                 from end of volume if negative */

      length4  sig_length;    /* byte length of component */

      opaque   contents<>;    /* contents of this component of the
                                 signature (this is opaque) */

   };

   Note that the opaque "contents" field in the
   "pnfs_block_sig_component4" structure MUST NOT be interpreted as a
   zero-terminated string, as it may contain embedded zero-valued
   octets.  It contains exactly sig_length octets.  There are no
   restrictions on alignment (e.g., neither sig_offset nor sig_length
   are required to be multiples of 4).  The sig_offset is a signed
   quantity which when positive represents an offset from the start of
   the volume, and when negative represents an offset from the end of
   the volume.

   Negative offsets are permitted in order to simplify the client
   implementation on systems where the device label is found at a
   fixed offset from the end of the volume.  If the server uses
   negative offsets to describe the signature, then the client and
   server MUST NOT see different volume sizes.  Negative offsets
   SHOULD NOT be used in systems that dynamically resize volumes
   unless care is taken to ensure that the device label is always
   present at the offset from the end of the volume as seen by the
   clients.

   In the absence of a negative offset, imagine a system where the
   client has access to n volumes and a file system is striped across
   m volumes.  If those m disks are all different sizes, then in the
   worst case, the client would need to read n times m blocks in order
   to properly identify the volumes used by a layout.

   The pNFS client block layout driver uses this volume identification
   to map pnfs_block_volume_type4 VOLUME_SIMPLE deviceid4s to its
   local view of a LUN.
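
   As an illustration (this sketch is not part of the protocol), a
   client might match one signature component against a candidate
   volume as follows; read_volume() is an assumed helper that reads
   raw bytes from the candidate LU:

   #include <stdbool.h>
   #include <stdint.h>
   #include <string.h>

   /* Assumed helper: read 'len' bytes at absolute byte offset 'off'
    * of the candidate volume; returns 0 on success. */
   extern int read_volume(uint64_t off, void *buf, uint64_t len);

   static bool
   sig_component_matches(int64_t sig_offset, uint64_t sig_length,
                         const unsigned char *contents,
                         uint64_t volume_size)
   {
       /* A negative sig_offset is relative to the end of the volume,
        * so client and server must agree on the volume size. */
       uint64_t abs_off = (sig_offset >= 0)
           ? (uint64_t)sig_offset
           : volume_size - (uint64_t)(-sig_offset);

       unsigned char buf[4096];      /* assume small components */
       if (sig_length > sizeof(buf))
           return false;
       if (read_volume(abs_off, buf, sig_length) != 0)
           return false;
       /* "contents" is opaque and may contain embedded zero octets:
        * compare exactly sig_length octets with memcmp, not strcmp */
       return memcmp(buf, contents, sig_length) == 0;
   }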

2.2.2. Volume Topology

   The pNFS block server volume topology is expressed as an arbitrary
   combination of base volume types enumerated in the following data
   structures.

   enum pnfs_block_volume_type4 {

      VOLUME_SIMPLE = 0,      /* volume maps to a single LU */

      VOLUME_SLICE  = 1,      /* volume is a slice of another volume */

      VOLUME_CONCAT = 2,      /* volume is a concatenation of multiple
                                 volumes */

      VOLUME_STRIPE = 3       /* volume is striped across multiple
                                 volumes */

   };

   struct pnfs_block_simple_volume_info4 {

      deviceid4                  id;         /* this volume id */

      pnfs_block_sig_component4  ds<MAX_SIG_COMP>;
                                             /* disk signature */

   };

   struct pnfs_block_slice_volume_info4 {

      deviceid4        id;      /* this volume id */

      offset4          start;   /* block-offset of the start of the
                                   slice */

      length4          length;  /* length of slice in blocks */

      deviceid4        volume;  /* volume which is sliced */

   };

   struct pnfs_block_concat_volume_info4 {

      deviceid4         id;          /* this volume id */

      deviceid4         volumes<>;   /* volumes which are
                                        concatenated */

   };

   struct pnfs_block_stripe_volume_info4 {

      deviceid4         id;            /* this volume id */

      length4           stripe_unit;   /* size of the stripe */

      deviceid4         volumes<>;     /* volumes which are striped
                                          across */

   };

   union pnfs_block_volume4 switch (pnfs_block_volume_type4 type) {

         case VOLUME_SIMPLE:

               pnfs_block_simple_volume_info4 simple_info;

         case VOLUME_SLICE:

               pnfs_block_slice_volume_info4 slice_info;

         case VOLUME_CONCAT:

               pnfs_block_concat_volume_info4 concat_info;

         case VOLUME_STRIPE:

               pnfs_block_stripe_volume_info4 stripe_info;

         default:

               void;

   };

   struct pnfs_block_deviceaddr4 {

      deviceid4            root_id;    /* id of the root volume of the
                                          hierarchy */

      pnfs_block_volume4   volumes<>;  /* array of volumes */

   };

   The "pnfs_block_deviceaddr4" data structure is a structure that
   allows arbitrarily complex nested volume structures to be encoded.
   The types of aggregations that are allowed are stripes,
   concatenations, and slices.  Note that the volume topology expressed
   in the pnfs_block_deviceaddr4 data structure will always resolve to
   a set of volumes of pnfs_block_volume_type4 VOLUME_SIMPLE.  The
   array of volumes is ordered such that the root volume is the last
   element of the array.  Concat, slice and stripe volumes MUST refer
   to volumes defined by lower indexed elements of the array.

   The "pnfs_block_deviceaddr4" data structure is returned by the
   server as the storage-protocol-specific opaque field in the
   "devlist_item4" structure returned by a successful GETDEVICELIST
   operation, and in the field returned by a successful GETDEVICEINFO
   operation [NFSV4.1].
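
   For illustration, the following sketch resolves a logical volume
   offset down to the underlying VOLUME_SIMPLE volume.  The vol_t
   representation is hypothetical (it mirrors pnfs_block_volume4), and
   byte units are used throughout for simplicity, whereas the slice
   fields in the XDR above are expressed in blocks:

   #include <stddef.h>
   #include <stdint.h>

   typedef struct vol vol_t;
   struct vol {
       enum { SIMPLE, SLICE, CONCAT, STRIPE } type;
       uint64_t start;        /* SLICE: byte offset into child      */
       uint64_t stripe_unit;  /* STRIPE: stripe unit size in bytes  */
       int      nvols;        /* CONCAT/STRIPE: number of children  */
       vol_t  **vols;         /* children (lower-indexed volumes)   */
       uint64_t *lengths;     /* CONCAT: byte length of each child  */
   };

   /* Returns the SIMPLE volume holding *offp and rewrites *offp to
    * the offset within that volume; NULL if out of range. */
   static vol_t *
   resolve(vol_t *v, uint64_t *offp)
   {
       uint64_t off = *offp;
       switch (v->type) {
       case SIMPLE:
           return v;
       case SLICE:                  /* shift into the sliced volume */
           *offp = v->start + off;
           return resolve(v->vols[0], offp);
       case CONCAT: {               /* walk the children in order */
           int i;
           for (i = 0; i < v->nvols; i++) {
               if (off < v->lengths[i]) {
                   *offp = off;
                   return resolve(v->vols[i], offp);
               }
               off -= v->lengths[i];
           }
           return NULL;
       }
       case STRIPE: {               /* round-robin stripe units */
           uint64_t su   = v->stripe_unit;
           uint64_t unit = off / su;
           int      idx  = (int)(unit % (uint64_t)v->nvols);
           *offp = (unit / (uint64_t)v->nvols) * su + (off % su);
           return resolve(v->vols[idx], offp);
       }
       }
       return NULL;
   }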

2.2.3. GETDEVICELIST and GETDEVICEINFO deviceid4

   The deviceid4 returned in the devlist_item4 of a successful
   GETDEVICELIST operation is a shorthand id used to reference the
   whole volume topology.  Decoding the "pnfs_block_deviceaddr4"
   results in a flat ordering of 512-byte data blocks mapped to
   VOLUME_SIMPLE deviceid4s.  Combined with the deviceid4 mapping to a
   client LUN described in Section 2.2.1 (Volume Identification), a
   logical volume offset can be mapped to a block on a pNFS client LUN
   [NFSV4.1].

2.3. Data Structures: Extents and Extent Lists

   A pNFS block layout is a list of extents within a flat array of
   512-byte data blocks in a logical volume.  The details of the volume
   topology can be determined by using the GETDEVICEINFO or
   GETDEVICELIST operation (see discussion of volume identification,
   section 2.2 above).  The block layout describes the individual block
   extents on the volume that make up the file.

   enum pnfs_block_extent_state4 {

     READ_WRITE_DATA  = 0, /* the data located by this extent is valid
                              for reading and writing. */

     READ_DATA = 1,        /* the data located by this extent is valid
                              for reading only; it may not be written.
                              */

     INVALID_DATA = 2,     /* the location is valid; the data is
                              invalid. It is a newly (pre-) allocated
                              extent. There is physical space on the
                              volume. */

     NONE_DATA = 3,        /* the location is invalid. It is a hole in
                              the file. There is no physical space on
                              the volume. */

   };

   struct pnfs_block_extent4 {

     offset4         file_offset;     /* the starting offset in the
                                         file */

     length4         extent_length;   /* the size of the extent */

     offset4         storage_offset;  /* the starting offset in the
                                         volume */

     pnfs_block_extent_state4 es;     /* the state of this extent */

   };

   struct pnfs_block_layout4 {

      deviceid4          volume;       /* logical volume on which file
                                          is stored. */

      pnfs_block_extent4 extents<>;    /* extents which make up this
                                          layout. */

   };

   The block layout consists of a deviceid4, shorthand for the whole
   topology of the logical volume on which the file is stored, followed
   by a list of extents which map the logical regions of the file to
   physical locations on the volume.  The "storage_offset" field within
   each extent identifies a location on the logical volume described by
   the "volume" field in the layout.  The client is responsible for
   translating this logical offset into an offset on the appropriate
   underlying SAN logical unit.

   Each extent maps a logical region of the file onto a portion of the
   specified logical volume.  The file_offset, extent_length, and es
   fields for an extent returned from the server are always valid.  The
   interpretation of the storage_offset field depends on the value of
   es as follows (in increasing order):

   o  READ_WRITE_DATA means that storage_offset is valid, and points to
      valid/initialized data that can be read and written.

   o  READ_DATA means that storage_offset is valid and points to valid/
      initialized data which can only be read.  Write operations are
      prohibited; the client may need to request a read-write layout.

   o  INVALID_DATA means that storage_offset is valid, but points to
      invalid, uninitialized data.  This data must not be physically
      read from the disk until it has been initialized.  A read request
      for an INVALID_DATA extent must fill the user buffer with zeros.
      Write requests must write whole server-sized blocks to the disk;
      bytes not initialized by the user must be set to zero.  Any write
      to storage in an INVALID_DATA extent changes the written portion
      of the extent to READ_WRITE_DATA; the client is responsible for
      reporting this change via LAYOUTCOMMIT.

   o  NONE_DATA means that storage_offset is not valid, and this extent
      may not be used to satisfy write requests.  Read requests may be
      satisfied by zero-filling as for INVALID_DATA.  NONE_DATA extents
      may be returned by requests for readable extents; they are never
      returned if the request was for a writeable extent.

   An extent list lists all relevant extents in increasing order of the
   file_offset of each extent; any ties are broken by increasing order
   of the extent state (es).
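
   To make the extent-state semantics concrete, the following sketch
   (in C; find_extent() and read_storage() are assumed, illustrative
   helpers, and the read is assumed to fall within a single extent)
   shows a client read path that honors these rules:

   #include <stdint.h>
   #include <string.h>

   enum es { READ_WRITE_DATA, READ_DATA, INVALID_DATA, NONE_DATA };

   struct extent {
       uint64_t file_offset, extent_length, storage_offset;
       enum es  es;
   };

   /* Assumed helpers: extent lookup in the client's layout cache and
    * raw I/O to the logical volume. */
   extern struct extent *find_extent(uint64_t file_offset);
   extern int read_storage(uint64_t storage_offset, void *buf,
                           uint64_t len);

   /* Read 'len' bytes at file offset 'off' (single-extent case). */
   static int
   layout_read(uint64_t off, void *buf, uint64_t len)
   {
       struct extent *e = find_extent(off);
       if (e == NULL)
           return -1;       /* no layout: fall back to NFSv4 READ */
       switch (e->es) {
       case READ_WRITE_DATA:
       case READ_DATA:
           /* valid data: read directly from the storage volume */
           return read_storage(e->storage_offset +
                               (off - e->file_offset), buf, len);
       case INVALID_DATA:
       case NONE_DATA:
           /* uninitialized data or hole: reads as zeros, and the
            * storage must not be physically read */
           memset(buf, 0, len);
           return 0;
       }
       return -1;
   }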

2.3.1. Layout Requests and write Extent Lists

   Each request for a layout specifies at least three parameters: file
   offset, desired size, and minimum size.  If the status of a request
   indicates success, the extent list returned must meet the following
   criteria:

   o  A request for a readable (but not writeable) layout returns only
      READ_DATA or NONE_DATA extents (but not INVALID_DATA or
      READ_WRITE_DATA extents).

   o  A request for a writeable layout returns READ_WRITE_DATA or
      INVALID_DATA extents (but not NONE_DATA extents).  It may also
      return READ_DATA extents only when the offset ranges in those
      extents are also covered by INVALID_DATA extents to permit
      writes.

   o  The first extent in the list MUST contain the starting offset.

   o  The total size of extents in the extent list MUST cover at least
      the minimum size and no more than the desired size.  One
      exception is allowed: the total size MAY be smaller if only
      readable extents were requested and EOF is encountered.

   o  Extents in the extent list MUST be logically contiguous for a
      read-only layout.  For a read-write layout, the set of writable
      extents (i.e., excluding READ_DATA extents) MUST be logically
      contiguous.  Every READ_DATA extent in a read-write layout MUST
      be covered by an INVALID_DATA extent.  This overlap of READ_DATA
      and INVALID_DATA extents is the only permitted extent overlap.

   o  Extents MUST be ordered in the list by starting offset, with
      READ_DATA extents preceding INVALID_DATA extents in the case of
      equal file_offsets.

2.3.2. Layout Commits

   struct pnfs_block_layoutupdate4 {

     pnfs_block_extent4 commit_list<>;/* list of extents which now
                                         contain valid data. */

     bool               make_version; /* client requests server to
                                         create copy-on-write image of
                                         this file. */

   }

   The "pnfs_block_layoutupdate4" structure is used by the client as
   the block-protocol specific argument in a LAYOUTCOMMIT operation.
   The "commit_list" field is an extent list covering regions of the
   file layout that were previously in the INVALID_DATA state, but
   have been written by the client and should now be considered in the
   READ_WRITE_DATA state.  The es field of each extent in the
   commit_list MUST be set to READ_WRITE_DATA.  Implementers should be
   aware that a server may be unable to commit regions at a
   granularity smaller than a file-system block (typically 4KB or
   8KB).  As noted above, the block-size that the server uses is
   available as an NFSv4 attribute, and any extents included in the
   "commit_list" MUST be aligned to this granularity and have a size
   that is a multiple of this granularity.  If the client believes
   that its actions have moved the end-of-file into the middle of a
   block being committed, the client MUST write zeroes from the
   end-of-file to the end of that block before committing the block.
   Failure to do so may result in junk (uninitialized data) appearing
   in that area if the file is subsequently extended by moving the
   end-of-file.

   The "make_version" field of the structure is a flag that the client
   may set to request that the server create a copy-on-write image of
   the file (pNFS clients may be involved in this operation - see
   section 2.3.4, below).  In anticipation of this operation the
   client which sets the "make_version" flag in the LAYOUTCOMMIT
   operation should immediately mark all extents in the layout that it
   possesses as state READ_DATA.  Future writes to the file require a
   new LAYOUTGET operation to the server with an "iomode" set to
   LAYOUTIOMODE_RW.
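
   The commit-granularity and end-of-file zeroing rules above can be
   illustrated with a short sketch (zero_storage() is an assumed
   helper; this is not a normative algorithm):

   #include <stdint.h>

   #define ROUND_DOWN(x, b) ((x) - ((x) % (b)))
   #define ROUND_UP(x, b)   ROUND_DOWN((x) + (b) - 1, (b))

   /* Assumed helper: write 'len' zero bytes at a storage offset. */
   extern int zero_storage(uint64_t storage_offset, uint64_t len);

   /* Expand a written byte range [off, off+len) to server-block
    * granularity; if end-of-file falls inside the last block, zero
    * from EOF to the end of that block before committing.
    * storage_base is the storage offset corresponding to *c_off. */
   static int
   prepare_commit_range(uint64_t off, uint64_t len,
                        uint64_t blocksize, uint64_t eof,
                        uint64_t storage_base,
                        uint64_t *c_off, uint64_t *c_len)
   {
       *c_off = ROUND_DOWN(off, blocksize);
       *c_len = ROUND_UP(off + len, blocksize) - *c_off;

       uint64_t end = *c_off + *c_len;
       if (eof > *c_off && eof < end)
           return zero_storage(storage_base + (eof - *c_off),
                               end - eof);
       return 0;
   }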

2.3.3. Layout Returns

   struct pnfs_block_layoutreturn4 {

     pnfs_block_extent4 rel_list<>;   /* list of extents the client
                                         will no longer use. */

   }

   The "rel_list" field is an extent list covering regions of the file
   layout that are no longer needed by the client.  Including extents
   in the "rel_list" for a LAYOUTRETURN operation represents an
   explicit release of resources by the client, usually done for the
   purpose of avoiding unnecessary CB_LAYOUTRECALL operations in the
   future.

   Note that the block/volume layout supports unilateral layout
   revocation.  When a layout is unilaterally revoked by the server,
   usually due to the client's lease timer expiring or the client
   failing to return a layout in a timely manner, it is important for
   the sake of correctness that any in-flight I/Os that the client
   issued before the layout was revoked are rejected at the storage.
   For the block/volume protocol, this is possible by fencing a client
   with an expired layout timer from the physical storage.  Note,
   however, that the granularity of this operation can only be at the
   host/logical-unit level.  Thus, if one of a client's layouts is
   unilaterally revoked by the server, it will effectively render
   useless *all* of the client's layouts for files located on the
   storage units comprising the logical volume.  This may render
   useless the client's layouts for files in other filesystems.

2.3.4. Client Copy-on-Write Processing

   Distinguishing the READ_WRITE_DATA and READ_DATA extent types in
   combination with the allowed overlap of READ_DATA extents with
   INVALID_DATA extents allows copy-on-write processing to be done by
   pNFS clients.  In classic NFS, this operation would be done by the
   server.  Since pNFS enables clients to do direct block access, it
   is useful for clients to participate in copy-on-write operations.
   All block/volume pNFS clients MUST support this copy-on-write
   processing.

   When a client wishes to write data covered by a READ_DATA extent,
   it MUST have requested a writable layout from the server; that
   layout will contain INVALID_DATA extents to cover all the data
   ranges of that layout's READ_DATA extents.  More precisely, for any
   file_offset range covered by one or more READ_DATA extents in a
   writable layout, the server MUST include one or more INVALID_DATA
   extents in the layout that cover the same file_offset range.  When
   performing a write to such an area of a layout, the client MUST
   effectively copy the data from the READ_DATA extent for any partial
   blocks of file_offset and range, merge in the changes to be
   written, and write the result to the INVALID_DATA extent for the
   blocks for that file_offset and range.  That is, if entire blocks
   of data are to be overwritten by an operation, the corresponding
   READ_DATA blocks need not be fetched, but any partial-block writes
   must be merged with data fetched via READ_DATA extents before
   storing the result via INVALID_DATA extents.  For the purposes of
   this discussion, "entire blocks" and "partial blocks" refer to the
   server's file-system block size.  Storing of data in an
   INVALID_DATA extent converts the written portion of the
   INVALID_DATA extent to a READ_WRITE_DATA extent; all subsequent
   reads MUST be performed from this extent; the corresponding portion
   of the READ_DATA extent MUST NOT be used after storing data in an
   INVALID_DATA extent.

   In the LAYOUTCOMMIT operation that normally sends updated layout
   information back to the server, for writable data, some
   INVALID_DATA extents may be committed as READ_WRITE_DATA extents,
   signifying that the storage at the corresponding storage_offset
   values has been stored into and is now to be considered as valid
   data to be read.  READ_DATA extents are not committed to the
   server.  For extents that the client receives via LAYOUTGET as
   INVALID_DATA and returns via LAYOUTCOMMIT as READ_WRITE_DATA, the
   server will understand that the READ_DATA mapping for that extent
   is no longer valid or necessary for that file.
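
   For illustration, a per-block version of this copy-on-write write
   path might look as follows (read_storage() and write_storage() are
   assumed helpers; the offsets are the storage_offset mappings of the
   overlapping READ_DATA and INVALID_DATA extents):

   #include <stdint.h>
   #include <string.h>

   extern int read_storage(uint64_t storage_offset, void *buf,
                           uint64_t len);
   extern int write_storage(uint64_t storage_offset, const void *buf,
                            uint64_t len);

   /* Copy-on-write for one server-sized block.  'data' covers
    * [write_off, write_off+write_len) within the block; rd/inv are
    * the storage offsets of this block in the READ_DATA and
    * INVALID_DATA extents respectively. */
   static int
   cow_write_block(uint64_t blocksize, uint64_t rd, uint64_t inv,
                   uint64_t write_off, const void *data,
                   uint64_t write_len)
   {
       unsigned char blk[8192];       /* assume blocksize <= 8192 */
       if (blocksize > sizeof(blk) ||
           write_off + write_len > blocksize)
           return -1;

       if (write_len == blocksize)    /* entire block: no fetch */
           return write_storage(inv, data, blocksize);

       /* partial block: fetch via READ_DATA, merge, store whole
        * block via INVALID_DATA; the written portion then becomes
        * READ_WRITE_DATA and the READ_DATA copy is no longer used */
       if (read_storage(rd, blk, blocksize) != 0)
           return -1;
       memcpy(blk + write_off, data, write_len);
       return write_storage(inv, blk, blocksize);
   }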

2.3.5. Extents are Permissions

   Layout extents returned to pNFS clients grant permission to read or
   write; READ_DATA and NONE_DATA are read-only (NONE_DATA reads as
   zeroes), READ_WRITE_DATA and INVALID_DATA are read/write
   (INVALID_DATA reads as zeros, any write converts it to
   READ_WRITE_DATA).  This is the only client means of obtaining
   permission to perform direct I/O to storage devices; a pNFS client
   MUST NOT perform direct I/O operations that are not permitted by an
   extent held by the client.  Client adherence to this rule places
   the pNFS server in control of potentially conflicting storage
   device operations, enabling the server to determine what does
   conflict and how to avoid conflicts by granting and recalling
   extents to/from clients.

   Block/volume class storage devices are not required to perform read
   and write operations atomically.  Overlapping concurrent read and
   write operations to the same data may cause the read to return a
   mixture of before-write and after-write data.  Overlapping write
   operations can be worse, as the result could be a mixture of data
   from the two write operations; data corruption can occur if the
   underlying storage is striped and the operations complete in
   different orders on different stripes.  A pNFS server can avoid
   these conflicts by implementing a single writer XOR multiple
   readers concurrency control policy when there are multiple clients
   who wish to access the same data.  This policy SHOULD be
   implemented when storage devices do not provide atomicity for
   concurrent read/write and write/write operations to the same data.

   If a client makes a layout request that conflicts with an existing
   layout delegation, the request will be rejected with the error
   NFS4ERR_LAYOUTTRYLATER.  The client is then expected to retry the
   request after a short interval.  During this interval the server
   SHOULD recall the conflicting portion of the layout delegation from
   the client that currently holds it.  This reject-and-retry approach
   does not prevent client starvation when there is contention for the
   layout of a particular file.  For this reason a pNFS server SHOULD
   implement a mechanism to prevent starvation.  One possibility is
   that the server can maintain a queue of rejected layout requests.
   Each new layout request can be checked to see if it conflicts with
   a previous rejected request, and if so, the newer request can be
   rejected.  Once the original requesting client retries its request,
   its entry in the rejected request queue can be cleared, or the
   entry in the rejected request queue can be removed when it reaches
   a certain age.

   NFSv4 supports mandatory locks and share reservations.  These are
   mechanisms that clients can use to restrict the set of I/O
   operations that are permissible to other clients.  Since all I/O
   operations ultimately arrive at the NFSv4 server for processing,
   the server is in a position to enforce these restrictions.
   However, with pNFS layout delegations, I/Os will be issued from the
   clients that hold the delegations directly to the storage devices
   that host the data.  These devices have no knowledge of files,
   mandatory locks, or share reservations, and are not in a position
   to enforce such restrictions.  For this reason the NFSv4 server
   MUST NOT grant layout delegations that conflict with mandatory
   locks or share reservations.  Further, if a conflicting mandatory
   lock request or a conflicting open request arrives at the server,
   the server MUST recall the part of the layout delegation in
   conflict with the request before granting the request.

2.3.6. End-of-file Processing

   The end-of-file location can be changed in two ways: implicitly as
   the result of a WRITE or LAYOUTCOMMIT beyond the current
   end-of-file, or explicitly as the result of a SETATTR request.
   Typically, when a file is truncated by an NFSv4 client via the
   SETATTR call, the server frees any disk blocks belonging to the
   file which are beyond the new end-of-file byte, and may write zeros
   to the portion of the new end-of-file block beyond the new
   end-of-file byte.  These actions render any pNFS layouts which
   refer to the blocks that are freed or written semantically invalid.
   Therefore, the server MUST recall from clients the portions of any
   pNFS layouts which refer to blocks that will be freed or written by
   the server before processing the truncate request.  These recalls
   may take time to complete; as explained in [NFSv4.1], if the server
   cannot respond to the client SETATTR request in a reasonable amount
   of time, it SHOULD reply to the client with the error
   NFS4ERR_DELAY.

   Blocks in the INVALID_DATA state which lie beyond the new
   end-of-file block present a special case.  The server has reserved
   these blocks for use by a pNFS client with a writable layout for
   the file, but the client has yet to commit the blocks, and they are
   not yet a part of the file mapping on disk.  The server MAY free
   these blocks while processing the SETATTR request.  If so, the
   server MUST recall any layouts from pNFS clients which refer to the
   blocks before processing the truncate.  If the server does not free
   the INVALID_DATA blocks while processing the SETATTR request, it
   need not recall layouts which refer only to the INVALID_DATA
   blocks.

   When a file is extended implicitly by a WRITE or LAYOUTCOMMIT
   beyond the current end-of-file, or extended explicitly by a SETATTR
   request, the server need not recall any portions of any pNFS
   layouts.

2.4. Crash Recovery Issues

   When the server crashes while the client holds a writable layout, and
   the client has written data to blocks covered by the layout, and the
   blocks are still in the INVALID_DATA state, the client has two
   options for recovery.  If the data that has been written to these
   blocks is still cached by the client, the client can simply re-write
   the data via NFSv4, once the server has come back online.  However,
   if the data is no longer in the client's cache, the client MUST NOT
   attempt to source the data from the data servers.  Instead, it should
   attempt to commit the blocks in question to the server during the
   server's recovery grace period, by sending a LAYOUTCOMMIT with the
   "reclaim" flag set to true. This process is described in detail in
   [NFSv4.1] section 21.42.4.
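
   The recovery choice described above can be summarized in a small
   sketch (all helper functions are assumed; layoutcommit_reclaim()
   stands for a LAYOUTCOMMIT with the "reclaim" flag set to true):

   #include <stdint.h>

   /* Assumed helpers: regular NFSv4 WRITE and a LAYOUTCOMMIT carrying
    * the "reclaim" flag set to true. */
   extern int nfs_write(uint64_t off, const void *buf, uint64_t len);
   extern int layoutcommit_reclaim(uint64_t off, uint64_t len);

   /* Called during the server's recovery grace period for each range
    * written to INVALID_DATA blocks but not yet committed.  'cached'
    * is the client's cached copy of the data, or NULL if evicted. */
   static int
   recover_written_range(uint64_t off, uint64_t len,
                         const void *cached)
   {
       if (cached != NULL)
           /* data still cached: simply re-write it via NFSv4 */
           return nfs_write(off, cached, len);
       /* data only on storage: do not re-source it; instead commit
        * the blocks via LAYOUTCOMMIT with reclaim set to true */
       return layoutcommit_reclaim(off, len);
   }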

3. Security Considerations

   Typically, SAN disk arrays and SAN protocols provide access control
   mechanisms (access-logics, lun masking, etc.) which operate at the
   granularity of individual hosts.  The functionality provided by such
   mechanisms makes it possible for the server to "fence" individual
   client machines from certain physical disks---that is to say, to
   prevent individual client machines from reading or writing to certain
   physical disks.  Finer-grained access control methods are not
   generally available.  For this reason, certain security
   responsibilities are delegated to pNFS clients for block/volume
   layouts.  Block/volume storage systems generally control access at a
   volume granularity, and hence pNFS clients have to be trusted to only
   perform accesses allowed by the layout extents they currently hold
   (i.e., not access storage for files on which no layout extent is
   held).  In general, the server will not be able to prevent a
   client which holds a layout for a file from accessing parts of the
   physical disk not covered by the layout.  Similarly, the server will
   not be able to prevent a client from accessing blocks covered by a
   layout that it has already returned.  This block-based level of
   protection must be provided by the client software.
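
   The client-side protection required here amounts to checking every
   direct SAN I/O against the extents of the layouts currently held.  A
   minimal, non-normative C sketch follows; the structures and field
   names are illustrative only (the actual extent representation is the
   pnfs_block_extent4 defined earlier in this document).

      #include <stdbool.h>
      #include <stdint.h>

      struct held_extent {
          uint64_t storage_offset;  /* extent start on the volume */
          uint64_t length;          /* extent length in bytes */
          bool     writable;        /* READ_WRITE_DATA or INVALID_DATA */
      };

      struct held_layout {
          struct held_extent *extents;
          unsigned            nextents;
      };

      /*
       * Return true only if [offset, offset + len) is covered by a
       * held extent permitting the requested access; otherwise the
       * client must not issue the I/O to the storage device.
       */
      bool io_permitted(const struct held_layout *hl,
                        uint64_t offset, uint64_t len, bool is_write)
      {
          for (unsigned i = 0; i < hl->nextents; i++) {
              const struct held_extent *e = &hl->extents[i];
              if (offset >= e->storage_offset &&
                  offset + len <= e->storage_offset + e->length)
                  return !is_write || e->writable;
          }
          return false;  /* not covered by any held extent */
      }

   Note that the same check also covers the mandatory read-only lock
   case discussed below: when all extents held for a file are read-only,
   any attempted direct write is rejected.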

   An alternative method of block/volume protocol use is for the storage
   devices to export virtualized block addresses, which do reflect the
   files to which blocks belong.  These virtual block addresses are
   exported to pNFS clients via layouts.  This allows the storage device
   to make appropriate access checks, while mapping virtual block
   addresses to physical block addresses.  In environments where the
   security requirements are such that client-side protection from
   access to storage outside of the layout is not sufficient, pNFS
   block/volume storage layouts SHOULD NOT be used, unless the
   storage device is able to implement the appropriate access checks,
   via use of virtualized block addresses, or other means.

   This also has implications for some NFSv4 functionality outside pNFS.
   For instance, if a file is covered by a mandatory read-only lock, the
   server can ensure that only readable layouts for the file are granted
   to pNFS clients.  However, it is up to each pNFS client to ensure
   that the readable layout is used only to service read requests, and
   not to allow writes to the existing parts of the file.  Since
   block/volume storage systems are generally not capable of enforcing
   such file-based security, in environments where pNFS clients cannot
   be trusted to enforce such policies, pNFS block/volume storage
   layouts SHOULD NOT be used.

   Access to block/volume storage is logically at a lower layer of the
   I/O stack than NFSv4, and hence NFSv4 security is not directly
   applicable to protocols that access such storage directly.  Depending
   on the protocol, some of the security mechanisms provided by NFSv4
   (e.g., encryption, cryptographic integrity) may not be available, or
   may be provided via different means.  At one extreme, pNFS with
   block/volume storage can be used with storage access protocols (e.g.,
   parallel SCSI) that provide essentially no security functionality.
   At the other extreme, pNFS may be used with storage protocols such as
   iSCSI that provide significant security functionality.  It is the
   responsibility of those administering and deploying pNFS with a
   block/volume storage access protocol to ensure that appropriate
   protection is provided to that protocol (physical security is a
   common means for protocols not based on IP).  In environments where
   the security requirements for the storage protocol cannot be met,
   pNFS block/volume storage layouts SHOULD NOT be used.

   When security is available for a storage protocol, it is generally at
   a different granularity and with a different notion of identity than
   NFSv4 (e.g., NFSv4 controls user access to files, iSCSI controls
   initiator access to volumes).  The responsibility for enforcing
   appropriate correspondences between these security layers is placed
   upon the pNFS client.  As with the issues in the first paragraph of
   this section, in environments where the security requirements are
   such that client-side protection from access to storage outside of
   the layout is not sufficient, pNFS block/volume storage layouts
   SHOULD NOT be used.

4. Conclusions

   This draft specifies the block/volume layout type for pNFS and
   associated functionality.

5. IANA Considerations

   There are no IANA considerations in this document.  All pNFS IANA
   Considerations are covered in [NFSv4.1].

6. Revision History

   -00: Initial Version as draft-black-pnfs-block-00

   -01: Rework discussion of extents as locks to talk about extents
   granting access permissions.  Rewrite operation ordering section to
   discuss deadlocks and races that can cause problems.  Add new section
   on recall completion.  Add client copy-on-write based on text from
   Craig Everhart.

   -02: Fix glitches in extent state descriptions.  Describe most issues
   as RESOLVED.  Most of Section 3 has been incorporated into the main
   pNFS draft; add NOTE to that effect and say that it will be deleted
   in the next version of this draft (which should be a draft-ietf-
   nfsv4 draft).  Cleanup of a number of things has been left to that
   draft revision, including the interlocks with the types in the main
   pNFS draft, layout striping support, and finishing the Security
   Considerations section.

   -00: New version as draft-ietf-nfsv4-pnfs-block.  Removed resolved
   operations issues (Section 3).  Align types with main pNFS draft
   (which is now part of the NFSv4.1 minor version draft), add volume
   striping and slicing support.  New operations issues are in Section 3
   - the need for a "reclaim bit" and EOF concerns are the two major
   issues.  Extended and improved the Security Considerations section,
   but it still needs work.  Added 1-sentence conclusion that also still
   needs work.

   -01: Changed definition of pnfs_block_deviceaddr4 union to allow more
   concise representation of aggregated volume structures.  Fixed typos
   to make both pnfs_block_layoutupdate and pnfs_block_layoutreturn
   structures contain extent lists instead of a single extent.  Updated
   section 2.1.6 to remove references to CB_SIZECHANGED. Moved
   description of recovery from "Issues" section to "Block Layout
   Description" section. Removed section 3.2 "End-of-file handling
   issues".  Merged old "block/volume layout security considerations"
   section from previous version of [NFSv4.1] with section 4.  Moved
   paragraph on lingering writes to the section which describes layout
   return.  Removed Issues section (3) as the remaining issues are all
   resolved.

   -02: Changed pnfs_deviceaddr4 to deviceaddr4 to match [NFSv4.1].
   Updated section 2.2.2 to clarify that the es fields must be
   READ_WRITE_DATA in pnfs_block_layoutupdate requests.  Updated section
   2.2.5 to specify that data corruption can occur; that requests, not
   the client, are rejected; and that the server "SHOULD" recall
   conflicting
   portions of layouts.  Clarified that unilateral revocation may affect
   layouts from other filesystems.  Changed signature offset to be a
   signed quantity to allow for labels at a fixed location from the end
   of a volume.  Changed all data structures to have suffix "4", changed
   extentState4 to pnfs_block_extent_state4 and sigComponent to
   pnfs_block_sig_component4, to conform to [NFSv4.1].

   -03: Moved sections GETDEVICELIST and GETDEVICEINFO earlier in the
   document for better readability.  Added pnfs_block_simple_volume4
   data structure, and added volume_id fields to all pnfs_block volume
   info data structures.

7. Acknowledgments

   This draft draws extensively on the authors' familiarity with the
   mapping functionality and protocol in EMC's HighRoad system
   [HighRoad].  The protocol used by HighRoad is called FMP (File
   Mapping Protocol); it is an add-on protocol that runs in parallel
   with filesystem protocols such as NFSv3 to provide pNFS-like
   functionality for block/volume storage.  While drawing on HighRoad
   FMP, the data structures and functional considerations in this draft
   differ in significant ways, based on lessons learned and the
   opportunity to take advantage of NFSv4 features such as COMPOUND
   operations.  The design to support pNFS client participation in copy-
   on-write is based on text and ideas contributed by Craig Everhart
   (formerly with IBM).

8. References

8.1. Normative References

   [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
             Requirement Levels", BCP 14, RFC 2119, March 1997.

   [NFSv4.1] Shepler, S., Eisler, M., and Noveck, D., eds., "NFSv4 Minor
             Version 1", draft-ietf-nfsv4-minorversion1-08.txt, Internet
             Draft, October 2006.

8.2. Informative References

   [HighRoad] EMC Corporation, "EMC Celerra HighRoad", EMC C819.1 white
              paper, available at:
   http://www.emc.com/pdf/products/celerra_file_server/HighRoad_wp.pdf
              link checked 29 August 2006.

Authors' Addresses

   David L. Black
   EMC Corporation
   176 South Street
   Hopkinton, MA 01748

   Phone: +1 (508) 293-7953
   Email: black_david@emc.com

   Stephen Fridella
   EMC Corporation
   228 South Street
   Hopkinton, MA  01748

   Phone: +1 (508) 249-3528
   Email: fridella_stephen@emc.com

   Jason Glasgow
   EMC Corporation
   32 Coslin Drive
   Southboro, MA  01772

   Phone: +1 (508) 305 8831
   Email: glasgow_jason@emc.com

Intellectual Property Statement

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed to
   pertain to the implementation or use of the technology described in
   this document or the extent to which any license under such rights
   might or might not be available; nor does it represent that it has
   made any independent effort to identify any such rights.  Information
   on the procedures with respect to rights in RFC documents can be
   found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use of
   such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository at
   http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard.  Please address the information to the IETF at ietf-
   ipr@ietf.org.

Disclaimer of Validity

   This document and the information contained herein are provided on an
   "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS
   OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST AND
   THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS
   OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF
   THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED
   WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Copyright Statement

   Copyright (C) The IETF Trust (2007).

   This document is subject to the rights, licenses and restrictions
   contained in BCP 78, and except as set forth therein, the authors
   retain all their rights.

Acknowledgment

   Funding for the RFC Editor function is currently provided by the
   Internet Society.