NFSv4 Working Group                                        David L. Black
Internet Draft                                            Stephen Fridella
Expires: February 28, 2007                                 EMC Corporation
                                                            August 30, 2006
                         pNFS Block/Volume Layout
                    draft-ietf-nfsv4-pnfs-block-01.txt
Status of this Memo
By submitting this Internet-Draft, each author represents that any
applicable patent or other IPR claims of which he or she is aware have
been or will be disclosed, and any of which he or she becomes aware
will be disclosed, in accordance with Section 6 of BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task
Force (IETF), its areas, and its working groups.  Note that other
groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html
This Internet-Draft will expire in February 2007.
Abstract
Parallel NFS (pNFS) extends NFSv4 to allow clients to directly access
file data on the storage used by the NFSv4 server.  This ability to
bypass the server for data access can increase both performance and
parallelism, but requires additional client functionality for data
access, some of which is dependent on the class of storage used.  The
main pNFS operations draft specifies storage-class-independent
extensions to NFS; this draft specifies the additional extensions
(primarily data structures) for use of pNFS with block and volume
based storage.
Conventions used in this document

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC-2119 [RFC2119].
Table of Contents

   1. Introduction...................................................3
   2. Block Layout Description.......................................3
      2.1. Background and Architecture...............................3
      2.2. Data Structures: Extents and Extent Lists.................4
         2.2.1. Layout Requests and Extent Lists.....................6
         2.2.2. Layout Commits.......................................7
         2.2.3. Layout Returns.......................................8
         2.2.4. Client Copy-on-Write Processing......................9
         2.2.5. Extents are Permissions.............................10
         2.2.6. End-of-file Processing..............................11
      2.3. Volume Identification....................................12
      2.4. Crash Recovery Issues....................................14
   3. Security Considerations.......................................14
   4. Conclusions...................................................16
   5. IANA Considerations...........................................16
   6. Revision History..............................................16
   7. Acknowledgments...............................................17
   8. References....................................................17
      8.1. Normative References.....................................17
      8.2. Informative References...................................18
   Author's Addresses...............................................18
   Intellectual Property Statement..................................18
   Disclaimer of Validity...........................................19
   Copyright Statement..............................................19
   Acknowledgment...................................................19
1. Introduction

Figure 1 shows the overall architecture of a pNFS system:
   [Figure 1: pNFS Architecture.  pNFS clients communicate with the
   Server using NFSv4 + pNFS and access the Storage Systems directly
   over a storage protocol; the Server communicates with the Storage
   Systems over a separate Server-to-Storage System protocol.]
The overall approach is that pNFS-enhanced clients obtain sufficient
information from the server to enable them to access the underlying
storage (on the Storage Systems) directly.  See the pNFS portion of
[NFSV4.1] for more details.  This draft is concerned with access from
pNFS clients to Storage Systems over storage protocols based on blocks
and volumes, such as the SCSI protocol family (e.g., parallel SCSI,
FCP for Fibre Channel, iSCSI, SAS).  This class of storage is referred
to as block/volume storage.  While the Server to Storage System
protocol is not of concern for interoperability here, it will
typically also be a block/volume protocol when clients use
block/volume protocols.
2. Block Layout Description

2.1. Background and Architecture

The fundamental storage abstraction supported by block/volume storage
is a storage volume consisting of a sequential series of fixed size
blocks.  This can be thought of as a logical disk; it may be realized
by the Storage System as a physical disk, a portion of a physical disk
or something more complex (e.g., concatenation, striping, RAID, and
combinations thereof) involving multiple physical disks or portions
thereof.

A pNFS layout for this block/volume class of storage is responsible
for mapping from an NFS file (or portion of a file) to the blocks of
storage volumes that contain the file.

[...]

A read-only layout may contain holes that are read as zero, whereas a
read-write layout will contain allocated, but uninitialized storage in
those holes (read as zero, can be written by client).  This draft also
supports client participation in copy on write by providing both
read-only and uninitialized storage for the same range in a layout.
Reads are initially performed on the read-only storage, with writes
going to the uninitialized storage.  After the first write that
initializes the uninitialized storage, all reads are performed to that
now-initialized writeable storage, and the corresponding read-only
storage is no longer used.
2.2. Data Structures: Extents and Extent Lists

A pNFS block layout is a list of extents within a flat array of
512-byte data blocks known as a volume.  A volume may correspond to a
single logical unit in a SAN, or a more complex aggregation of
multiple logical units.  The details of the volume topology can be
determined by using the GETDEVICEINFO or GETDEVICELIST operation (see
discussion of volume identification, section 2.3 below).  The block
layout describes the individual block extents on the volume that make
up the file.  Each individual extent MUST be at least 512-byte
aligned.

      enum extentState4 {
          READ_WRITE_DATA = 0, /* the data located by this extent is
                                  valid for reading and writing. */
          READ_DATA       = 1, /* the data located by this extent is
                                  valid for reading only; it may not
                                  be written. */
          INVALID_DATA    = 2, /* the location is valid; the data is
                                  invalid.  It is a newly (pre-)
                                  allocated extent.  There is physical
                                  space on the volume. */
          NONE_DATA       = 3  /* the location is invalid.  It is a
                                  hole in the file.  There is no
                                  physical space on the volume. */
      };

      struct pnfs_block_extent {
          offset4      file_offset;    /* the starting byte offset in
                                          the file */
          length4      length;         /* the size of the extent */
          offset4      storage_offset; /* the starting offset in the
                                          volume */
          extentState4 es;             /* the state of this extent */
      };

      struct pnfs_block_layout {
          pnfs_deviceid4    volume;    /* logical volume on which file
                                          is stored. */
          pnfs_block_extent extents<>; /* extents which make up this
                                          layout. */
      };

The block layout consists of an identifier of the logical volume on
which the file is stored, followed by a list of extents which map the
logical regions of the file to physical locations on the volume.  The
"storage_offset" field within each extent identifies a location on the
logical volume described by the "volume" field in the layout.  The
client is responsible for translating this logical offset into an
offset on the appropriate underlying SAN logical unit.

Each extent maps a logical region of the file onto a portion of the
specified logical volume.  The file_offset, length, and es fields for
an extent returned from the server are always valid.  The
interpretation of the storage_offset field depends on the value of es
as follows:

o  READ_WRITE_DATA means that storage_offset is valid, and points to
   valid/initialized data that can be read and written.

o  READ_DATA means that storage_offset is valid, and points to
   valid/initialized data which can only be read.  Writing to this
   area is not allowed.

o  INVALID_DATA means that storage_offset is valid, but points to
   invalid, uninitialized data.  This data must not be physically read
   from the disk until it has been initialized; a read request for an
   INVALID_DATA extent must fill the user buffer with zeros.  Write
   requests must write whole server-sized blocks to the disk; bytes
   not initialized by the user must be set to zero.  Any write to
   storage in an INVALID_DATA extent changes the written portion of
   the extent to READ_WRITE_DATA; the pNFS client is responsible for
   reporting this change via LAYOUTCOMMIT.

o  NONE_DATA means that storage_offset is not valid, and this extent
   may not be used to satisfy write requests.  Read requests may be
   satisfied by zero-filling as for INVALID_DATA.  NONE_DATA extents
   are returned by requests for readable extents; they are never
   returned if the request was for a writeable extent.

The extent list lists all relevant extents in increasing order of the
file_offset of each extent; any ties are broken by increasing order of
the extent state (es).
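
To illustrate how a client might apply these extent states when
servicing reads, the following C sketch (not part of the protocol; the
types are simplified analogues of the XDR above, and read_volume() is
a hypothetical SAN read routine) locates the extent covering a file
offset and either reads from the volume or zero-fills:

   #include <stdint.h>
   #include <string.h>
   #include <stddef.h>

   enum extent_state { READ_WRITE_DATA, READ_DATA, INVALID_DATA,
                       NONE_DATA };

   struct block_extent {
       uint64_t file_offset;     /* starting byte offset in the file   */
       uint64_t length;          /* size of the extent in bytes        */
       uint64_t storage_offset;  /* starting byte offset in the volume */
       enum extent_state es;
   };

   /* Hypothetical SAN read helper. */
   extern int read_volume(uint64_t storage_offset, void *buf,
                          uint64_t len);

   /* Return the first extent in the (sorted) list covering 'offset';
    * in a writable layout this picks the READ_DATA extent rather than
    * an overlapping INVALID_DATA extent, which is what reads want
    * before the range has been written. */
   static const struct block_extent *
   find_extent(const struct block_extent *list, size_t n, uint64_t offset)
   {
       for (size_t i = 0; i < n; i++) {
           if (offset >= list[i].file_offset &&
               offset < list[i].file_offset + list[i].length)
               return &list[i];
       }
       return NULL;
   }

   /* Service a read of 'len' bytes at 'offset', assumed to lie
    * entirely within extent 'e'. */
   static int layout_read(const struct block_extent *e, uint64_t offset,
                          void *buf, uint64_t len)
   {
       switch (e->es) {
       case READ_WRITE_DATA:
       case READ_DATA:     /* valid data: read it from the volume    */
           return read_volume(e->storage_offset +
                              (offset - e->file_offset), buf, len);
       case INVALID_DATA:  /* allocated but uninitialized: zero-fill */
       case NONE_DATA:     /* hole: zero-fill                        */
           memset(buf, 0, len);
           return 0;
       }
       return -1;
   }
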
2.2.1. Layout Requests and Extent Lists

Each request for a layout specifies at least three parameters: offset,
desired size, and minimum size.  If the status of a request indicates
success, the extent list returned must meet the following criteria:

o  A request for a readable (but not writeable) layout returns only
   READ_DATA or NONE_DATA extents (but not INVALID_DATA or
   READ_WRITE_DATA extents).

o  A request for a writeable layout returns READ_WRITE_DATA or
   INVALID_DATA extents (but not NONE_DATA extents).  It may also
   return READ_DATA extents only when the offset ranges in those
   extents are also covered by INVALID_DATA extents to permit writes.

o  The first extent in the list MUST contain the requested starting
   offset.

o  The total size of extents within the requested range MUST cover at
   least the minimum size.  One exception is allowed: the total size
   MAY be smaller if only readable extents were requested and EOF is
   encountered.

o  Extents in the extent list MUST be logically contiguous for a
   read-only layout.  For a read-write layout, the set of writable
   extents (i.e., excluding READ_DATA extents) MUST be logically
   contiguous.  Every READ_DATA extent in a read-write layout MUST be
   covered by an INVALID_DATA extent.  This overlap of READ_DATA and
   INVALID_DATA extents is the only permitted extent overlap.

o  Extents MUST be ordered in the list by starting offset, with
   READ_DATA extents preceding INVALID_DATA extents in the case of
   equal file_offsets.
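
The criteria above lend themselves to a simple client-side sanity
check of a returned read-write extent list.  The sketch below is
illustrative only (simplified types, hypothetical function name); it
verifies that writable extents are contiguous, that no NONE_DATA
extents appear, and that each READ_DATA extent lies within the
writable range (a necessary condition for being covered by
INVALID_DATA extents):

   #include <stdint.h>
   #include <stdbool.h>
   #include <stddef.h>

   enum extent_state { READ_WRITE_DATA, READ_DATA, INVALID_DATA,
                       NONE_DATA };

   struct block_extent {
       uint64_t file_offset, length, storage_offset;
       enum extent_state es;
   };

   static bool writable_layout_ok(const struct block_extent *ext,
                                  size_t n)
   {
       uint64_t w_start = 0, w_end = 0;
       bool have_writable = false;

       /* Pass 1: writable extents must be logically contiguous. */
       for (size_t i = 0; i < n; i++) {
           if (ext[i].es == NONE_DATA)
               return false;          /* never valid when writable    */
           if (ext[i].es == READ_DATA)
               continue;              /* checked in pass 2            */
           if (!have_writable) {
               w_start = ext[i].file_offset;
               w_end   = ext[i].file_offset + ext[i].length;
               have_writable = true;
           } else if (ext[i].file_offset != w_end) {
               return false;          /* gap or overlap in writable
                                         space                        */
           } else {
               w_end += ext[i].length;
           }
       }

       /* Pass 2: each READ_DATA extent must fall inside the writable
        * range built above. */
       for (size_t i = 0; i < n; i++) {
           if (ext[i].es != READ_DATA)
               continue;
           if (!have_writable ||
               ext[i].file_offset < w_start ||
               ext[i].file_offset + ext[i].length > w_end)
               return false;
       }
       return true;
   }
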
2.2.2. Layout Commits

      struct pnfs_block_layoutupdate {
          pnfs_block_extent commit_list<>; /* list of extents which now
                                              contain valid data. */
          bool              make_version;  /* client requests server to
                                              create copy-on-write image
                                              of this file. */
      };
The "pnfs_block_layoutupdate" structure is used by the client as the The "pnfs_block_layoutupdate" structure is used by the client as the
block-protocol specific argument in a LAYOUTCOMMIT operation. The block-protocol specific argument in a LAYOUTCOMMIT operation. The
skipping to change at page 9, line 37 skipping to change at page 8, line 16
included in the "commit_list" must be aligned on this granularity. included in the "commit_list" must be aligned on this granularity.
If the client believes that its actions have moved the end-of-file If the client believes that its actions have moved the end-of-file
into the middle of a block being committed, the client MUST write into the middle of a block being committed, the client MUST write
zeroes from the end-of-file to the end of that block before zeroes from the end-of-file to the end of that block before
committing the block. Failure to do so may result in junk committing the block. Failure to do so may result in junk
(uninitialized data) appearing in that area if the file is (uninitialized data) appearing in that area if the file is
subsequently extended by moving the end-of-file. subsequently extended by moving the end-of-file.
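
As a small worked example of the zeroing rule, the helper below
(illustrative only) computes how many bytes a client must zero from
the end-of-file to the end of the block that contains it, given the
server's file-system block size:

   #include <stdint.h>

   /* Return the number of bytes that must be zeroed from end-of-file
    * to the end of the block containing it (0 if the end-of-file
    * already falls on a block boundary). */
   static uint64_t eof_zero_fill_length(uint64_t eof, uint64_t block_size)
   {
       uint64_t in_block = eof % block_size;
       return in_block == 0 ? 0 : block_size - in_block;
   }

For example, with an 8192-byte block size and an end-of-file at offset
12000, the client zeroes 4384 bytes (offsets 12000 through 16383)
before committing that block.
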
The "make_version" field of the structure is a flag that the client The "make_version" field of the structure is a flag that the client
may set to request that the server create a copy-on-write image of may set to request that the server create a copy-on-write image of
the file (see section 2.1.4, below). In anticipation of this the file (pNFS clients may be involved in this operation - see
operation the client which sets the "make_version" flag in the section 2.2.4, below). In anticipation of this operation the client
LAYOUTCOMMIT operation should immediately mark all extents in the which sets the "make_version" flag in the LAYOUTCOMMIT operation
layout that is possesses as state READ_DATA. Future writes to the should immediately mark all extents in the layout that is possesses
file require a new LAYOUTGET operation to the server with an "iomode" as state READ_DATA. Future writes to the file require a new
set to LAYOUTIOMODE_RW. LAYOUTGET operation to the server with an "iomode" set to
LAYOUTIOMODE_RW.
2.2.3. Layout Returns

      struct pnfs_block_layoutreturn {
          pnfs_block_extent rel_list<>; /* list of extents the client
                                           will no longer use. */
      };
The "rel_list" field is an extent list covering regions of the file The "rel_list" field is an extent list covering regions of the file
layout that are no longer needed by the client. Including extents in layout that are no longer needed by the client. Including extents in
the "rel_list" for a LAYOUTRETURN operation represents an explicit the "rel_list" for a LAYOUTRETURN operation represents an explicit
release of resources by the client, usually done for the purpose of release of resources by the client, usually done for the purpose of
avoiding unnecessary CB_LAYOUTRECALL operations in the future. avoiding unnecessary CB_LAYOUTRECALL operations in the future.

Note that the block/volume layout supports unilateral layout
revocation.  When a layout is unilaterally revoked by the server,
usually due to the client's lease timer expiring or the client failing
to return a layout in a timely manner, it is important for the sake of
correctness that any in-flight I/Os that the client issued before the
layout was revoked are rejected at the storage.  For the block/volume
protocol, this is possible by fencing a client with an expired layout
timer from the physical storage.  Note, however, that the granularity
of this operation can only be at the host/logical-unit level.  Thus,
if one of a client's layouts is unilaterally revoked by the server, it
will effectively render useless *all* of the client's layouts for
files in the same filesystem.

2.2.4. Client Copy-on-Write Processing

Distinguishing the READ_WRITE_DATA and READ_DATA extent types in
combination with the allowed overlap of READ_DATA extents with
INVALID_DATA extents allows copy-on-write processing to be done by
pNFS clients.  In classic NFS, this operation would be done by the
server.  Since pNFS enables clients to do direct block access, it is
useful for clients to participate in copy-on-write operations.  All
block/volume pNFS clients MUST support this copy-on-write processing.

When a client wishes to write data covered by a READ_DATA extent, it
MUST have requested a writable layout from the server; that layout
will contain INVALID_DATA extents to cover all the data ranges of that
layout's READ_DATA extents.  More precisely, for any file_offset range
covered by one or more READ_DATA extents in a writable layout, the
server MUST include one or more INVALID_DATA extents in the layout
that cover the same file_offset range.  When performing a write to
such an area of a layout, the client MUST effectively copy the data
from the READ_DATA extent for any partial blocks of file_offset and
range, merge in the changes to be written, and write the result to the
INVALID_DATA extent for the blocks for that file_offset and range.
That is, if entire blocks of data are to be overwritten by an
operation, the corresponding READ_DATA blocks need not be fetched, but
any partial-block writes must be merged with data fetched via
READ_DATA extents before storing the result via INVALID_DATA extents.
For the purposes of this discussion, "entire blocks" and "partial
blocks" refer to the server's file-system block size.  Storing of data
in an INVALID_DATA extent converts the written portion of the
INVALID_DATA extent to a READ_WRITE_DATA extent; all subsequent reads
MUST be performed from this extent; the corresponding portion of the
READ_DATA extent MUST NOT be used after storing data in an
INVALID_DATA extent.
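
The copy-on-write behavior described above amounts to a per-block
read-modify-write.  The following sketch is illustrative only:
read_volume() and write_volume() are hypothetical stand-ins for the
client's SAN I/O path, and only partial blocks are merged with data
fetched through the READ_DATA extent:

   #include <stdint.h>
   #include <string.h>
   #include <stdlib.h>

   extern int read_volume(uint64_t storage_offset, void *buf,
                          uint64_t len);
   extern int write_volume(uint64_t storage_offset, const void *buf,
                           uint64_t len);

   /* Copy-on-write one file-system block.  'blk_file_off' is the
    * block's offset in the file, 'ro_off'/'inv_off' are the volume
    * offsets of that block within the READ_DATA and INVALID_DATA
    * extents, and [off, off+len) is the portion of the block being
    * overwritten with 'data'. */
   static int cow_write_block(uint64_t blk_file_off, uint64_t blk_size,
                              uint64_t ro_off, uint64_t inv_off,
                              uint64_t off, uint64_t len,
                              const void *data)
   {
       unsigned char *blk = malloc(blk_size);
       int rc;

       if (blk == NULL)
           return -1;

       if (len == blk_size) {
           /* Whole block overwritten: no need to fetch the READ_DATA
            * copy. */
           memcpy(blk, data, blk_size);
       } else {
           /* Partial block: fetch old data via the READ_DATA extent
            * and merge in the new bytes. */
           rc = read_volume(ro_off, blk, blk_size);
           if (rc != 0) { free(blk); return rc; }
           memcpy(blk + (off - blk_file_off), data, len);
       }

       /* Store the result via the INVALID_DATA extent; this portion
        * of the extent becomes READ_WRITE_DATA and must be reported
        * at LAYOUTCOMMIT. */
       rc = write_volume(inv_off, blk, blk_size);
       free(blk);
       return rc;
   }
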
In the LAYOUTCOMMIT operation that normally sends updated layout
information back to the server, for writable data, some INVALID_DATA
extents may be committed as READ_WRITE_DATA extents, signifying that
the storage at the corresponding storage_offset values has been stored
into and is now to be considered as valid data to be read.  READ_DATA
extents are not committed to the server.  For extents that the client
receives via LAYOUTGET as INVALID_DATA and returns via LAYOUTCOMMIT as
READ_WRITE_DATA, the server will understand that the READ_DATA mapping
for that extent is no longer valid or necessary for that file.

2.2.5. Extents are Permissions

Layout extents returned to pNFS clients grant permission to read or
write; READ_DATA and NONE_DATA are read-only (NONE_DATA reads as
zeroes), READ_WRITE_DATA and INVALID_DATA are read/write (INVALID_DATA
reads as zeros; any write converts it to READ_WRITE_DATA).  This is
the only client means of obtaining permission to perform direct I/O to
storage devices; a pNFS client MUST NOT perform direct I/O operations
that are not permitted by an extent held by the client.  Client
adherence to this rule places the pNFS server in control of
potentially conflicting storage device accesses.

[...]

The NFSv4 server cannot pass responsibility for enforcing mandatory
locks or share reservations along with the layout delegations directly
to the storage devices that host the data.
These devices have no knowledge of files, mandatory locks, or share
reservations, and are not in a position to enforce such restrictions.
For this reason the NFSv4 server MUST NOT grant layout delegations
that conflict with mandatory locks or share reservations.  Further, if
a conflicting mandatory lock request or a conflicting open request
arrives at the server, the server MUST recall the part of the layout
delegation in conflict with the request before processing the request.
2.2.6. End-of-file Processing

The end-of-file location can be changed in two ways: implicitly as the
result of a WRITE or LAYOUTCOMMIT beyond the current end-of-file, or
explicitly as the result of a SETATTR request.  Typically, when a file
is truncated by an NFSv4 client via the SETATTR call, the server frees
any disk blocks belonging to the file which are beyond the new
end-of-file byte, and may write zeros to the portion of the new
end-of-file block beyond the new end-of-file byte.  These actions
render any pNFS layouts which refer to the blocks that are freed or
written semantically invalid.  Therefore, the server MUST recall from
clients the portions of any pNFS layouts which refer to blocks that
will be freed or written by the server before processing the truncate
request.  These recalls may take time to complete; as explained in
[NFSv4.1], if the server cannot respond to the client SETATTR request
in a reasonable amount of time, it SHOULD reply to the client with the
error NFS4ERR_DELAY.

Blocks in the INVALID_DATA state which lie beyond the new end-of-file
block present a special case.  The server has reserved these blocks
for use by a pNFS client with a writable layout for the file, but the
client has yet to commit the blocks, and they are not yet a part of
the file mapping on disk.  The server MAY free these blocks while
processing the SETATTR request.  If so, the server MUST recall any
layouts from pNFS clients which refer to the blocks before processing
the truncate.  If the server does not free the INVALID_DATA blocks
while processing the SETATTR request, it need not recall layouts which
refer only to the INVALID_DATA blocks.

When a file is extended implicitly by a WRITE or LAYOUTCOMMIT beyond
the current end-of-file, or extended explicitly by a SETATTR request,
the server need not recall any portions of any pNFS layouts.
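
On the server side, the rules above reduce to a per-extent recall
decision when a SETATTR truncates the file.  The sketch below is
illustrative (simplified bookkeeping, hypothetical names), assuming
the server knows which extents each client holds and whether it frees
INVALID_DATA blocks during truncation:

   #include <stdint.h>
   #include <stdbool.h>

   enum extent_state { READ_WRITE_DATA, READ_DATA, INVALID_DATA,
                       NONE_DATA };

   struct held_extent {
       uint64_t file_offset, length;
       enum extent_state es;
   };

   /* Decide whether a held extent must be recalled before the server
    * truncates the file to 'new_eof'.  'frees_invalid' says whether
    * this server frees provisionally allocated (INVALID_DATA) blocks
    * when it truncates; 'block_size' is the file-system block size. */
   static bool must_recall_for_truncate(const struct held_extent *e,
                                        uint64_t new_eof,
                                        uint64_t block_size,
                                        bool frees_invalid)
   {
       /* First byte the server may free or write: the start of the
        * block containing the new end-of-file (the partial EOF block
        * itself may be zeroed beyond new_eof). */
       uint64_t affected_from = new_eof - (new_eof % block_size);

       if (e->file_offset + e->length <= affected_from)
           return false;          /* extent entirely below the change */

       if (e->es == INVALID_DATA && !frees_invalid)
           return false;          /* blocks stay reserved; the layout
                                     remains meaningful               */

       return true;               /* refers to blocks to be freed or
                                     written: recall it               */
   }
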
2.3. Volume Identification

Storage Systems such as storage arrays can have multiple physical
network ports that need not be connected to a common network,
resulting in a pNFS client having simultaneous multipath access to the
same storage volumes via different ports on different networks.  The
networks may not even be the same technology - for example, access to
the same volume via both iSCSI and Fibre Channel is possible, hence
network addresses are difficult to use for volume identification.  For
this reason, this pNFS block layout identifies storage volumes by
content, for example providing the means to match (unique portions of)
labels used by volume managers.

[...]

      struct sigComponent {  /* disk signature component */
          offset4 sig_offset;   /* byte offset of component */
          length4 sig_length;   /* byte length of component */
          opaque  contents<>;   /* contents of this component of the
                                   signature (this is opaque) */
      };

      enum pnfs_block_volume_type {
          VOLUME_SIMPLE = 0,   /* volume maps to a single LU */
          VOLUME_SLICE  = 1,   /* volume is a slice of another volume */
          VOLUME_CONCAT = 2,   /* volume is a concatenation of multiple
                                  volumes */
          VOLUME_STRIPE = 3,   /* volume is striped across multiple
                                  volumes */
      };

      struct pnfs_block_slice_volume_info {
          offset4        start;     /* block-offset of the start of the
                                       slice */
          length4        length;    /* length of slice in blocks */
          pnfs_deviceid4 volume;    /* volume which is sliced */
      };

      struct pnfs_block_concat_volume_info {
          pnfs_deviceid4 volumes<>; /* volumes which are concatenated */
      };

      struct pnfs_block_stripe_volume_info {
          length4        stripe_unit; /* size of stripe */
          pnfs_deviceid4 volumes<>;   /* volumes which are striped
                                         across */
      };

      union pnfs_block_deviceaddr4 switch (pnfs_block_volume_type type) {
          case VOLUME_SIMPLE:
              sigComponent ds<MAX_SIG_COMP>;    /* disk signature */
          case VOLUME_SLICE:
              pnfs_block_slice_volume_info slice_info;
          case VOLUME_CONCAT:
              pnfs_block_concat_volume_info concat_info;
          case VOLUME_STRIPE:
              pnfs_block_stripe_volume_info stripe_info;
          default:
              void;
      };

The "pnfs_block_deviceaddr4" union is a recursive structure that
allows arbitrarily complex nested volume structures to be encoded.
The types of aggregations that are allowed are stripes,
concatenations, and slices.  The base case is a volume which maps
simply to one logical unit in the SAN, identified by the
"sigComponent" structure.  Each SAN logical unit is content-identified
by a disk signature made up of extents within blocks and contents that
must match.  The "pnfs_block_deviceaddr4" union is returned by the
server as the storage-protocol-specific opaque field in the
"pnfs_deviceaddr4" structure, in response to the GETDEVICEINFO or
GETDEVICELIST operations.  Note that the opaque "contents" field in
the "sigComponent" structure MUST NOT be interpreted as a
zero-terminated string, as it may contain embedded zero-valued octets.
It contains exactly sig_length octets.  There are no restrictions on
alignment (e.g., neither sig_offset nor sig_length are required to be
multiples of 4).
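
As an illustration of how a client might interpret this volume
topology, the following sketch resolves a logical volume offset to an
offset on an underlying logical unit by walking slice, concatenation,
and stripe nodes.  It is an in-memory analogue of the XDR union above,
not the protocol encoding; all names, and the use of byte rather than
block units, are illustrative assumptions:

   #include <stdint.h>
   #include <stddef.h>

   enum vol_type { VOLUME_SIMPLE, VOLUME_SLICE, VOLUME_CONCAT,
                   VOLUME_STRIPE };

   struct volume {
       enum vol_type   type;
       int             lu_id;        /* SIMPLE: the logical unit      */
       uint64_t        start, length;/* SLICE: window into the child  */
       uint64_t        stripe_unit;  /* STRIPE: stripe unit size      */
       struct volume **children;     /* 1 for SLICE, >1 otherwise     */
       size_t          nchildren;
       uint64_t       *child_sizes;  /* sizes of children, for CONCAT */
   };

   /* Map a byte offset on 'v' to an offset on an underlying logical
    * unit; returns the LU id (or -1) and stores the LU offset in
    * *lu_off. */
   static int resolve_offset(const struct volume *v, uint64_t off,
                             uint64_t *lu_off)
   {
       switch (v->type) {
       case VOLUME_SIMPLE:
           *lu_off = off;
           return v->lu_id;

       case VOLUME_SLICE:                /* shift into the sliced child */
           if (off >= v->length)
               return -1;
           return resolve_offset(v->children[0], v->start + off, lu_off);

       case VOLUME_CONCAT:               /* find which child holds 'off' */
           for (size_t i = 0; i < v->nchildren; i++) {
               if (off < v->child_sizes[i])
                   return resolve_offset(v->children[i], off, lu_off);
               off -= v->child_sizes[i];
           }
           return -1;

       case VOLUME_STRIPE: {             /* round-robin by stripe unit */
           uint64_t su     = v->stripe_unit;
           uint64_t stripe = off / su;
           size_t   child  = stripe % v->nchildren;
           uint64_t coff   = (stripe / v->nchildren) * su + (off % su);
           return resolve_offset(v->children[child], coff, lu_off);
       }
       }
       return -1;
   }
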
2.4. Crash Recovery Issues

When the server crashes while the client holds a writable layout, and
the client has written data to blocks covered by the layout, and the
blocks are still in the INVALID_DATA state, the client has two options
for recovery.  If the data that has been written to these blocks is
still cached by the client, the client can simply re-write the data
via NFSv4, once the server has come back online.  However, if the data
is no longer in the client's cache, the client MUST NOT attempt to
source the data from the data servers.  Instead, it should attempt to
commit the blocks in question to the server during the server's
recovery grace period, by sending a LAYOUTCOMMIT with the "reclaim"
flag set to true.  This process is described in detail in [NFSv4.1]
section 21.42.4.
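
The recovery choice described above could be structured as in the
following client-side sketch (the helper functions are hypothetical;
the reclaim-flavored LAYOUTCOMMIT is the mechanism referenced from
[NFSv4.1]):

   #include <stdbool.h>
   #include <stdint.h>

   /* Hypothetical client-side helpers. */
   extern bool data_still_cached(uint64_t file_offset, uint64_t length);
   extern int  nfs_write_from_cache(uint64_t file_offset, uint64_t length);
   extern int  layoutcommit_reclaim(uint64_t file_offset, uint64_t length);

   /* After server restart: recover one written-but-uncommitted
    * (INVALID_DATA) range of a file. */
   static int recover_uncommitted_range(uint64_t file_offset,
                                        uint64_t length)
   {
       if (data_still_cached(file_offset, length)) {
           /* Safe path: replay the data through ordinary NFSv4
            * WRITEs once the server is back. */
           return nfs_write_from_cache(file_offset, length);
       }
       /* Data exists only on the SAN blocks: do NOT read it back from
        * storage; instead commit the blocks during the server's grace
        * period with the "reclaim" flag set. */
       return layoutcommit_reclaim(file_offset, length);
   }
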
3. Security Considerations
Typically, SAN disk arrays and SAN protocols provide access control
mechanisms (access-logics, lun masking, etc.) which operate at the
granularity of individual hosts. The functionality provided by such
mechanisms makes it possible for the server to "fence" individual
client machines from certain physical disks---that is to say, to
prevent individual client machines from reading or writing to certain
physical disks. Finer-grained access control methods are not
generally available. For this reason, certain security
responsibilities are delegated to pNFS clients for block/volume
layouts. Block/volume storage systems generally control access at a
volume granularity, and hence pNFS clients have to be trusted to only
perform accesses allowed by the layout extents they currently hold
(e.g., and not access storage for files on which a layout extent is
not held).  In general, the server will not be able to prevent a
client which holds a layout for a file from accessing parts of the
physical disk not covered by the layout.  Similarly, the server will
not be able to prevent a client from accessing blocks covered by a
layout that it has already returned.  This block-based level of
protection must be provided by the client software.

An alternative method of block/volume protocol use is for the storage
devices to export virtualized block addresses, which do reflect the
files to which blocks belong.  These virtual block addresses are
exported to pNFS clients via layouts.  This allows the storage device
to make appropriate access checks, while mapping virtual block
addresses to physical block addresses.  In environments where the
security requirements are such that client-side protection from access
to storage outside of the layout is not sufficient, pNFS block/volume
storage layouts SHOULD NOT be used, unless the storage device is able
to implement the appropriate access checks, via use of virtualized
block addresses or other means.

This also has implications for some NFSv4 functionality outside pNFS.
For instance, if a file is covered by a mandatory read-only lock, the
server can ensure that only readable layouts for the file are granted
to pNFS clients.  However, it is up to each pNFS client to ensure that
the readable layout is used only to service read requests, and not to
allow writes to the existing parts of the file.  Since block/volume
storage systems are generally not capable of enforcing such file-based
security, in environments where pNFS clients cannot be trusted to
enforce such policies, pNFS block/volume storage layouts SHOULD NOT be
used.

Access to block/volume storage is logically at a lower layer of the
I/O stack than NFSv4, and hence NFSv4 security is not directly
applicable to protocols that access such storage directly.  Depending
on the protocol, some of the security mechanisms provided by NFSv4
(e.g., encryption, cryptographic integrity) may not be available, or
may be provided via different means.  At one extreme, pNFS with
block/volume storage can be used with storage access protocols (e.g.,
parallel SCSI) that provide essentially no security functionality.  At
the other extreme, pNFS may be used with storage protocols such as
iSCSI that provide significant functionality.
skipping to change at page 19, line 28 skipping to change at page 16, line 23
a different granularity and with a different notion of identity than a different granularity and with a different notion of identity than
NFSv4 (e.g., NFSv4 controls user access to files, iSCSI controls NFSv4 (e.g., NFSv4 controls user access to files, iSCSI controls
initiator access to volumes). The responsibility for enforcing initiator access to volumes). The responsibility for enforcing
appropriate correspondences between these security layers is placed appropriate correspondences between these security layers is placed
upon the pNFS client. As with the issues in the first paragraph of upon the pNFS client. As with the issues in the first paragraph of
this section, in environments where the security requirements are this section, in environments where the security requirements are
such that client-side protection from access to storage outside of such that client-side protection from access to storage outside of
the layout is not sufficient, pNFS block/volume storage layouts the layout is not sufficient, pNFS block/volume storage layouts
SHOULD NOT be used. SHOULD NOT be used.
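Because the storage device cannot check file-level permissions
itself, this correspondence is typically enforced in the client's I/O
path: a request is sent directly to the volume only if its byte range
is covered by an extent in a granted layout, and otherwise is sent
through the NFSv4 server. The sketch below illustrates one such check
under assumed names (block_extent, range_covered_by_layout); it is
not the draft's data layout.

   /* Client-side sketch of the correspondence check described above.
      The extent list and field names are illustrative only. */

   #include <stdbool.h>

   struct block_extent {
       unsigned long long file_offset;    /* start of range in file   */
       unsigned long long length;         /* length of the extent     */
       unsigned long long storage_offset; /* matching offset on volume*/
   };

   /* True only if [offset, offset + count) lies entirely inside one
      granted extent; otherwise the request must be sent to the NFSv4
      server rather than straight to the block device. */
   static bool range_covered_by_layout(const struct block_extent *ext,
                                       int n_ext,
                                       unsigned long long offset,
                                       unsigned long long count)
   {
       for (int i = 0; i < n_ext; i++) {
           if (offset >= ext[i].file_offset &&
               offset + count <= ext[i].file_offset + ext[i].length)
               return true;
       }
       return false;
   }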
5. Conclusions 4. Conclusions
This draft specifies the block/volume layout type for pNFS and This draft specifies the block/volume layout type for pNFS and
associated functionality. associated functionality.
6. IANA Considerations 5. IANA Considerations
There are no IANA considerations in this document. All pNFS IANA There are no IANA considerations in this document. All pNFS IANA
Considerations are covered in [NFSV4.1]. Considerations are covered in [NFSV4.1].
7. Revision History 6. Revision History
-00: Initial Version as draft-black-pnfs-block-00 -00: Initial Version as draft-black-pnfs-block-00
-01: Rework discussion of extents as locks to talk about extents -01: Rework discussion of extents as locks to talk about extents
granting access permissions. Rewrite operation ordering section to granting access permissions. Rewrite operation ordering section to
discuss deadlocks and races that can cause problems. Add new section discuss deadlocks and races that can cause problems. Add new section
on recall completion. Add client copy-on-write based on text from on recall completion. Add client copy-on-write based on text from
Craig Everhart. Craig Everhart.
-02: Fix glitches in extent state descriptions. Describe most issues -02: Fix glitches in extent state descriptions. Describe most issues
skipping to change at page 20, line 19 skipping to change at page 17, line 14
-00: New version as draft-ietf-nfsv4-pnfs-block. Removed resolved -00: New version as draft-ietf-nfsv4-pnfs-block. Removed resolved
operations issues (Section 3). Align types with main pNFS draft operations issues (Section 3). Align types with main pNFS draft
(which is now part of the NFSv4.1 minor version draft), add volume (which is now part of the NFSv4.1 minor version draft), add volume
striping and slicing support. New operations issues are in Section 3 striping and slicing support. New operations issues are in Section 3
- the need for a "reclaim bit" and EOF concerns are the two major - the need for a "reclaim bit" and EOF concerns are the two major
issues. Extended and improved the Security Considerations section, issues. Extended and improved the Security Considerations section,
but it still needs work. Added 1-sentence conclusion that also still but it still needs work. Added 1-sentence conclusion that also still
needs work. needs work.
-01: Changed definition of pnfs_block_deviceaddr4 union to allow more
concise representation of aggregated volume structures. Fixed typos
to make both pnfs_block_layoutupdate and pnfs_block_layoutreturn
structures contain extent lists instead of a single extent. Updated
section 2.1.6 to remove references to CB_SIZECHANGED. Moved
description of recovery from "Issues" section to "Block Layout
Description" section. Removed section 3.2 "End-of-file handling
issues". Merged old "block/volume layout security considerations"
section from previous version of [NFSv4.1] with section 4. Moved
paragraph on lingering writes to the section which describes layout
return. Removed Issues section (3) as the remaining issues are all
resolved.
8. Acknowledgments 7. Acknowledgments
This draft draws extensively on the authors' familiarity with the This draft draws extensively on the authors' familiarity with the
mapping functionality and protocol in EMC's HighRoad system mapping functionality and protocol in EMC's HighRoad system
[HighRoad]. The protocol used by HighRoad is called FMP (File [HighRoad]. The protocol used by HighRoad is called FMP (File
Mapping Protocol); it is an add-on protocol that runs in parallel Mapping Protocol); it is an add-on protocol that runs in parallel
with filesystem protocols such as NFSv3 to provide pNFS-like with filesystem protocols such as NFSv3 to provide pNFS-like
functionality for block/volume storage. While drawing on HighRoad functionality for block/volume storage. While drawing on HighRoad
FMP, the data structures and functional considerations in this draft FMP, the data structures and functional considerations in this draft
differ in significant ways, based on lessons learned and the differ in significant ways, based on lessons learned and the
opportunity to take advantage of NFSv4 features such as COMPOUND opportunity to take advantage of NFSv4 features such as COMPOUND
operations. The design to support pNFS client participation in copy- operations. The design to support pNFS client participation in copy-
on-write is based on text and ideas contributed by Craig Everhart on-write is based on text and ideas contributed by Craig Everhart
(formerly with IBM). (formerly with IBM).
9. References 8. References
9.1. Normative References 8.1. Normative References
[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
Requirement Levels", BCP 14, RFC 2119, March 1997. Requirement Levels", BCP 14, RFC 2119, March 1997.
[NFSV4.1] Shepler, S., ed., "NFSv4 Minor Version 1", draft-ietf-
nfsv4-minorversion1-01.txt, Work in Progress, December 2005.
[NFSV4.1] Shepler, S., Eisler, M., and Noveck, D., eds., "NFSv4 Minor
Version 1", draft-ietf-nfsv4-minorversion1-06.txt, Internet Draft,
August 2006.
9.2. Informative References 8.2. Informative References
[HighRoad] EMC Corporation, "EMC Celerra HighRoad", EMC C819.1 white [HighRoad] EMC Corporation, "EMC Celerra HighRoad", EMC C819.1 white
paper, available at: paper, available at:
http://www.emc.com/pdf/products/celerra_file_server/HighRoad_wp.pdf http://www.emc.com/pdf/products/celerra_file_server/HighRoad_wp.pdf
link checked 30 December 2005. link checked 29 August 2006.
Author's Addresses Author's Addresses
David L. Black David L. Black
EMC Corporation EMC Corporation
176 South Street 176 South Street
Hopkinton, MA 01748 Hopkinton, MA 01748
Phone: +1 (508) 293-7953 Phone: +1 (508) 293-7953
Email: black_david@emc.com Email: black_david@emc.com
skipping to change at page 22, line 17 skipping to change at page 19, line 20
This document and the information contained herein are provided on an This document and the information contained herein are provided on an
"AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS
OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND THE INTERNET OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND THE INTERNET
ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE
INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED
WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Copyright Statement Copyright Statement
Copyright (C) The Internet Society (2005). Copyright (C) The Internet Society (2006).
This document is subject to the rights, licenses and restrictions This document is subject to the rights, licenses and restrictions
contained in BCP 78, and except as set forth therein, the authors contained in BCP 78, and except as set forth therein, the authors
retain all their rights. retain all their rights.
Acknowledgment Acknowledgment
Funding for the RFC Editor function is currently provided by the Funding for the RFC Editor function is currently provided by the
Internet Society. Internet Society.