NFSv4 Working Group                                       David L. Black
Internet Draft                                           Stephen Fridella
Expires: September 2007                                     Jason Glasgow
Intended Status: Proposed Standard                        EMC Corporation
                                                            March 4, 2007

                        pNFS Block/Volume Layout
                   draft-ietf-nfsv4-pnfs-block-03.txt
Status of this Memo

By submitting this Internet-Draft, each author represents that
any applicable patent or other IPR claims of which he or she is
aware have been or will be disclosed, and any of which he or she
becomes aware will be disclosed, in accordance with Section 6 of
BCP 79.

Internet-Drafts are working documents of the Internet Engineering
skipping to change at page 1, line 35

and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html

This Internet-Draft will expire in September 2007.
Abstract

Parallel NFS (pNFS) extends NFSv4 to allow clients to directly access
file data on the storage used by the NFSv4 server. This ability to
bypass the server for data access can increase both performance and
parallelism, but requires additional client functionality for data
access, some of which is dependent on the class of storage used. The
main pNFS operations draft specifies storage-class-independent
extensions to NFS; this draft specifies the additional extensions

skipping to change at page 2, line 18

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC-2119 [RFC2119].
Table of Contents

1. Introduction...................................................3
2. Block Layout Description.......................................3
   2.1. Background and Architecture...............................3
   2.2. GETDEVICELIST and GETDEVICEINFO...........................4
      2.2.1. Volume Identification................................4
      2.2.2. Volume Topology......................................5
      2.2.3. GETDEVICELIST and GETDEVICEINFO deviceid4............8
   2.3. Data Structures: Extents and Extent Lists.................8
      2.3.1. Layout Requests and Extent Lists....................10
      2.3.2. Layout Commits......................................11
      2.3.3. Layout Returns......................................12
      2.3.4. Client Copy-on-Write Processing.....................13
      2.3.5. Extents are Permissions.............................14
      2.3.6. End-of-file Processing..............................15
   2.4. Crash Recovery Issues....................................16
3. Security Considerations.......................................16
4. Conclusions...................................................18
5. IANA Considerations...........................................18
6. Revision History..............................................18
7. Acknowledgments...............................................19
8. References....................................................19
   8.1. Normative References.....................................19
   8.2. Informative References...................................20
Author's Addresses...............................................20
Intellectual Property Statement..................................20
Disclaimer of Validity...........................................21
Copyright Statement..............................................21
Acknowledgment...................................................21
1. Introduction

Figure 1 shows the overall architecture of a pNFS system:
+-----------+
|+-----------+                                 +-----------+
||+-----------+                                |           |
|||           |          NFSv4 + pNFS          |           |
+||  Clients  |<------------------------------>|  Server   |

skipping to change at page 4, line 17
A pNFS layout for this block/volume class of storage is responsible
for mapping from an NFS file (or portion of a file) to the blocks of
storage volumes that contain the file. The blocks are expressed as
extents with 64 bit offsets and lengths using the existing NFSv4
offset4 and length4 types. Clients must be able to perform I/O to
the block extents without affecting additional areas of storage
(especially important for writes), therefore extents MUST be aligned
to 512-byte boundaries, and SHOULD be aligned to the block size used
by the NFSv4 server in managing the actual filesystem (4 kilobytes
and 8 kilobytes are common block sizes). This block size is
available as the NFSv4.1 layout_blocksize attribute
[draft-ietf-nfsv4-minorversion1-08].
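
As a purely illustrative aid (not part of the protocol), the alignment
rules above could be checked on the client along the following lines;
the function below is a hypothetical Python sketch, and "blocksize" is
assumed to carry the value of the layout_blocksize attribute.

   # Sketch: check the alignment rules stated above for one extent.
   # Offsets and lengths are plain integers (byte values).

   def check_extent_alignment(storage_offset, extent_length, blocksize):
       # Extents MUST be aligned to 512-byte boundaries.
       if storage_offset % 512 or extent_length % 512:
           raise ValueError("extent violates the 512-byte alignment MUST")
       # Extents SHOULD be aligned to the server's filesystem block size.
       if storage_offset % blocksize or extent_length % blocksize:
           print("warning: extent not aligned to layout_blocksize (SHOULD)")
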
The pNFS operation for requesting a layout (LAYOUTGET) includes the
"pnfs_layoutiomode4 iomode" argument which indicates whether the
requested layout is for read-only use or read-write use. A read-only
layout may contain holes that are read as zero, whereas a read-write
layout will contain allocated, but uninitialized storage in those
holes (read as zero, can be written by client). This draft also
supports client participation in copy on write by providing both
read-only and uninitialized storage for the same range in a layout.
Reads are initially performed on the read-only storage, with writes
going to the uninitialized storage. After the first write that
initializes the uninitialized storage, all reads are performed to
that now-initialized writeable storage, and the corresponding read-
only storage is no longer used.
2.2. GETDEVICELIST and GETDEVICEINFO
2.2.1. Volume Identification
Storage Systems such as storage arrays can have multiple physical
network ports that need not be connected to a common network,
resulting in a pNFS client having simultaneous multipath access to
the same storage volumes via different ports on different networks.
The networks may not even be the same technology - for example,
access to the same volume via both iSCSI and Fibre Channel is
possible; hence network addresses are difficult to use for volume
identification. For this reason, this pNFS block layout identifies
storage volumes by content, for example providing the means to match
(unique portions of) labels used by volume managers. Any block pNFS
system using this layout MUST support a means of content-based unique
volume identification that can be employed via the data structure
given here.
struct pnfs_block_sig_component4 { /* disk signature component */
int64_t sig_offset; /* byte offset of component
from start of volume if positive
from end of volume if negative*/
length4 sig_length; /* byte length of component */
opaque contents<>; /* contents of this component of the
signature (this is opaque) */
};
Note that the opaque "contents" field in the
"pnfs_block_sig_component4" structure MUST NOT be interpreted as a
zero-terminated string, as it may contain embedded zero-valued
octets. It contains exactly sig_length octets. There are no
restrictions on alignment (e.g., neither sig_offset nor sig_length
are required to be multiples of 4). The sig_offset is a signed
quantity which when positive represents an offset from the start of
the volume, and when negative represents an offset from the end of
the volume.
Negative offsets are permitted in order to simplify the client
implementation on systems where the device label is found at a fixed
offset from the end of the volume. If the server uses negative
offsets to describe the signature, then the client and server MUST
NOT see different volume sizes. Negative offsets SHOULD NOT be used
in systems that dynamically resize volumes unless care is taken to
ensure that the device label is always present at the offset from the
end of the volume as seen by the clients.
In the absence of a negative offset, imagine a system where the
client has access to n volumes and a file system is striped across m
volumes. If those m disks are all different sizes, then in the worst
case, the client would need to read n times m blocks in order to
properly identify the volumes used by a layout.
The pNFS client block layout driver uses this volume identification
to map pnfs_block_volume_type4 VOLUME_SIMPLE deviceid4s to its local
view of a LUN.
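
A minimal sketch of how a client might perform this content-based
matching is given below. It is not part of the protocol; read_lun() is
a hypothetical helper that returns raw bytes from a candidate LUN, and
signature components are shown as simple dictionaries carrying the
pnfs_block_sig_component4 fields.

   # Sketch: match a disk signature against one candidate LUN.
   # "components" is a list of pnfs_block_sig_component4 values.

   def signature_matches(lun, lun_size, components, read_lun):
       for comp in components:
           offset = comp["sig_offset"]
           if offset < 0:
               # Negative offsets are measured from the end of the volume.
               offset = lun_size + offset
           data = read_lun(lun, offset, comp["sig_length"])
           # "contents" is opaque; compare exact bytes, not a C string.
           if data != comp["contents"]:
               return False
       return True

A client would typically evaluate such a check against each LUN it can
see in order to bind a VOLUME_SIMPLE deviceid4 to a local device.
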
2.2.2. Volume Topology
The pNFS block server volume topology is expressed as an arbitrary
combination of base volume types enumerated in the following data
structures.
enum pnfs_block_volume_type4 {
VOLUME_SIMPLE = 0, /* volume maps to a single LU */
VOLUME_SLICE = 1, /* volume is a slice of another volume */
VOLUME_CONCAT = 2, /* volume is a concatenation of multiple
volumes */
VOLUME_STRIPE = 3 /* volume is striped across multiple
volumes */
};
struct pnfs_block_simple_volume_info4 {
deviceid4 id; /* this volume id */
pnfs_block_sig_component4 ds<MAX_SIG_COMP>;
/* disk signature */
};
struct pnfs_block_slice_volume_info4 {
deviceid4 id; /* this volume id */
offset4 start; /* block-offset of the start of the
slice */
length4 length; /* length of slice in blocks */
deviceid4 volume; /* volume which is sliced */
};
struct pnfs_block_concat_volume_info4 {
deviceid4 id;
/* this volume id */
deviceid4 volumes<>; /* volumes which are concatenated */
};
struct pnfs_block_stripe_volume_info4 {
deviceid4 id;
/* this volume id */
length4 stripe_unit; /* size of stripe */
deviceid4 volumes<>; /* volumes which are striped
across*/
};
union pnfs_block_volume4 switch (pnfs_block_volume_type4 type) {
case VOLUME_SIMPLE:
pnfs_block_simple_volume_info4 simple_info;
case VOLUME_SLICE:
pnfs_block_slice_volume_info4 slice_info;
case VOLUME_CONCAT:
pnfs_block_concat_volume_info4 concat_info;
case VOLUME_STRIPE:
pnfs_block_stripe_volume_info4 stripe_info;
default:
void;
};
struct pnfs_block_deviceaddr4 {
deviceid4 root_id; /* id of the root volume of the
hierarchy */
pnfs_block_volume4 volumes<>; /* array of volumes */
};
The "pnfs_block_deviceaddr4" data structure is a structure that
allows arbitrarily complex nested volume structures to be encoded.
The types of aggregations that are allowed are stripes,
concatenations, and slices. Note that the volume topology expressed
in the pnfs_block_devidceaddr4 data structure will always resolve to
a set of pnfs_block_volume_type4 VOLUME_SIMPLE. The array of volumes
is ordered such that the root volume is the last element of the
array. Concat, slice and stripe volumes MUST refer to volumes
defined by lower indexed elements of the array.
The "pnfs_block_deviceaddr4" data structure is returned by the server
as the storage-protocol-specific opaque field in the "devlist_item4"
structure returned by a successful GETDEVICELIST operation, and in
the only field returned by a successful GETDEVICEINFO operation.
[draft-ietf-nfsv4-minorversion1-08].
2.2.3. GETDEVICELIST and GETDEVICEINFO deviceid4
The deviceid4 returned in the devlist_item4 of a successful
GETDEVICELIST operation is a shorthand id used to reference the whole
volume topology. Decoding the "pnfs_block_deviceaddr4" results in a
flat ordering of 512-byte data blocks mapped to VOLUME_SIMPLE
deviceid4s. Combined with the deviceid4 mapping to a client LUN
described in 2.2.1 Volume Identification, a logical volume offset can
be mapped to a 512-byte block on a pNFS client LUN. [draft-ietf-nfsv4-
minorversion1-08]
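
The following sketch (illustrative only, not normative) shows one way
a client could walk the decoded topology to turn a block address on
the root volume into a (VOLUME_SIMPLE deviceid4, block address) pair.
"volumes" is assumed to map deviceid4 values to decoded
pnfs_block_volume4 entries, volume_size() is a hypothetical helper,
and all addresses are expressed in 512-byte blocks for brevity.

   # Sketch: resolve a root-volume block address to a simple volume.

   def resolve(volumes, vol_id, blkno, volume_size):
       vol = volumes[vol_id]
       kind = vol["type"]
       if kind == "VOLUME_SIMPLE":
           return (vol_id, blkno)                 # reached a physical LU
       if kind == "VOLUME_SLICE":
           # A slice is a contiguous window into another volume.
           return resolve(volumes, vol["volume"], vol["start"] + blkno,
                          volume_size)
       if kind == "VOLUME_CONCAT":
           # Concatenation: walk the member volumes in order.
           for sub in vol["volumes"]:
               size = volume_size(volumes, sub)
               if blkno < size:
                   return resolve(volumes, sub, blkno, volume_size)
               blkno -= size
           raise ValueError("address beyond end of concatenation")
       if kind == "VOLUME_STRIPE":
           # Striping: stripe_unit-sized chunks placed round-robin.
           unit = vol["stripe_unit"]
           chunk, within = divmod(blkno, unit)
           cols = vol["volumes"]
           sub = cols[chunk % len(cols)]
           sub_blkno = (chunk // len(cols)) * unit + within
           return resolve(volumes, sub, sub_blkno, volume_size)
       raise ValueError("unknown volume type")

The walk would start at root_id from pnfs_block_deviceaddr4, and the
resulting VOLUME_SIMPLE deviceid4 would be translated to a local LUN
using the signature matching described in section 2.2.1.
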
2.3. Data Structures: Extents and Extent Lists
A pNFS block layout is a list of extents within a flat array of 512-
byte data blocks in a logical volume. The details of the volume
topology can be determined by using the GETDEVICEINFO or
GETDEVICELIST operation (see discussion of volume identification,
section 2.2 above). The block layout describes the individual block
extents on the volume that make up the file.
enum pnfs_block_extent_state4 {
READ_WRITE_DATA = 0, /* the data located by this extent is valid
for reading and writing. */
READ_DATA = 1, /* the data located by this extent is valid
for reading only; it may not be written.
*/

skipping to change at page 9, line 27

volume. */
NONE_DATA = 3, /* the location is invalid. It is a hole in
the file. There is no physical space on
the volume. */
};
struct pnfs_block_extent4 {
offset4 file_offset; /* the starting offset in the
file */
length4 extent_length; /* the size of the extent */
offset4 storage_offset; /* the starting offset in the
volume */
pnfs_block_extent_state4 es; /* the state of this extent */
};
struct pnfs_block_layout4 {
deviceid4 volume; /* logical volume on which file
is stored. */
pnfs_block_extent4 extents<>; /* extents which make up this
layout. */
};
The block layout consists of a deviceid4, shorthand for the whole
topology of the logical volume on which the file is stored, followed
by a list of extents which map the logical regions of the file to
physical locations on the volume. The "storage_offset" field within
each extent identifies a location on the logical volume described by
the "volume" field in the layout. The client is responsible for
translating this logical offset into an offset on the appropriate
underlying SAN logical unit.
Each extent maps a logical region of the file onto a portion of the
specified logical volume. The file_offset, extent_length, and es
fields for an extent returned from the server are always valid. The
interpretation of the storage_offset field depends on the value of es
as follows (in increasing order):
o READ_WRITE_DATA means that storage_offset is valid, and points to
valid/initialized data that can be read and written.

o READ_DATA means that storage_offset is valid and points to valid/
initialized data which can only be read. Write operations are
prohibited; the client may need to request a read-write layout.

o INVALID_DATA means that storage_offset is valid, but points to
invalid uninitialized data. This data must not be physically read
from the disk until it has been initialized. A read request for
an INVALID_DATA extent must fill the user buffer with zeros. Write
requests must write whole server-sized blocks to the disk; bytes
not initialized by the user must be set to zero. Any write to
storage in an INVALID_DATA extent changes the written portion of
the extent to READ_WRITE_DATA; the pNFS client is responsible for
reporting this change via LAYOUTCOMMIT.

o NONE_DATA means that storage_offset is not valid, and this extent
may not be used to satisfy write requests. Read requests may be
satisfied by zero-filling as for INVALID_DATA. NONE_DATA extents
may be returned by requests for readable extents; they are never
returned if the request was for a writeable extent.
An extent list lists all relevant extents in increasing order of the
file_offset of each extent; any ties are broken by increasing order
of the extent state (es).
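
As an informal illustration of these rules, a client read path over an
extent list might look like the sketch below. It ignores the
READ_DATA/INVALID_DATA overlap used for copy-on-write (section 2.3.4);
find_extent() and read_volume() are hypothetical helpers, and extents
are shown as dictionaries with the pnfs_block_extent4 field names.

   # Sketch: satisfy a read from an extent list, zero-filling holes.

   def read_range(extents, volume, file_offset, length,
                  find_extent, read_volume):
       data = bytearray()
       while length > 0:
           ext = find_extent(extents, file_offset)
           within = file_offset - ext["file_offset"]
           n = min(length, ext["extent_length"] - within)
           if ext["es"] in ("INVALID_DATA", "NONE_DATA"):
               data += b"\0" * n            # never read these from disk
           else:                            # READ_DATA or READ_WRITE_DATA
               data += read_volume(volume,
                                   ext["storage_offset"] + within, n)
           file_offset += n
           length -= n
       return bytes(data)
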
2.3.1. Layout Requests and Extent Lists
Each request for a layout specifies at least three parameters: file
offset, desired size, and minimum size. If the status of a request
indicates success, the extent list returned must meet the following
criteria:

o A request for a readable (but not writeable) layout returns only
READ_DATA or NONE_DATA extents (but not INVALID_DATA or
READ_WRITE_DATA extents).

o A request for a writeable layout returns READ_WRITE_DATA or
INVALID_DATA extents (but not NONE_DATA extents). It may also
skipping to change at page 11, line 34
read-only layout. For a read-write layout, the set of writable
extents (i.e., excluding READ_DATA extents) MUST be logically
contiguous. Every READ_DATA extent in a read-write layout MUST be
covered by an INVALID_DATA extent. This overlap of READ_DATA and
INVALID_DATA extents is the only permitted extent overlap.

o Extents MUST be ordered in the list by starting offset, with
READ_DATA extents preceding INVALID_DATA extents in the case of
equal file_offsets.
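
The sketch below (illustrative, not normative) checks two of the
read-write criteria above - contiguity of writable extents and
coverage of READ_DATA extents - assuming the list is already ordered
as required and extents are dictionaries with the pnfs_block_extent4
field names.

   # Sketch: partial validation of a returned read-write extent list.

   def check_rw_layout(extents):
       writable = [e for e in extents
                   if e["es"] in ("READ_WRITE_DATA", "INVALID_DATA")]
       # Writable extents (excluding READ_DATA) must be contiguous.
       next_off = writable[0]["file_offset"]
       for e in writable:
           assert e["file_offset"] == next_off, "gap in writable extents"
           next_off += e["extent_length"]
       # Every READ_DATA extent must be covered by an INVALID_DATA extent.
       for r in (e for e in extents if e["es"] == "READ_DATA"):
           assert any(i["es"] == "INVALID_DATA"
                      and i["file_offset"] <= r["file_offset"]
                      and i["file_offset"] + i["extent_length"]
                          >= r["file_offset"] + r["extent_length"]
                      for i in extents), "READ_DATA extent not covered"
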
2.3.2. Layout Commits
struct pnfs_block_layoutupdate4 {
pnfs_block_extent4 commit_list<>; /* list of extents which now
contain valid data. */
bool make_version; /* client requests server to
create copy-on-write image of
this file. */
skipping to change at page 12, line 29
The "make_version" field of the structure is a flag that the client The "make_version" field of the structure is a flag that the client
may set to request that the server create a copy-on-write image of may set to request that the server create a copy-on-write image of
the file (pNFS clients may be involved in this operation - see the file (pNFS clients may be involved in this operation - see
section 2.2.4, below). In anticipation of this operation the client section 2.2.4, below). In anticipation of this operation the client
which sets the "make_version" flag in the LAYOUTCOMMIT operation which sets the "make_version" flag in the LAYOUTCOMMIT operation
should immediately mark all extents in the layout that is possesses should immediately mark all extents in the layout that is possesses
as state READ_DATA. Future writes to the file require a new as state READ_DATA. Future writes to the file require a new
LAYOUTGET operation to the server with an "iomode" set to LAYOUTGET operation to the server with an "iomode" set to
LAYOUTIOMODE_RW. LAYOUTIOMODE_RW.
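
As a non-normative sketch of the client side of LAYOUTCOMMIT, the
function below builds a commit_list from the byte ranges the client
has actually written into INVALID_DATA extents, reporting them as
READ_WRITE_DATA as described in section 2.3. The "written_ranges"
bookkeeping and the dictionary representation of extents are
assumptions of this sketch.

   # Sketch: derive commit_list entries from writes into INVALID_DATA.

   def build_commit_list(held_extents, written_ranges):
       commit = []
       for ext in held_extents:
           if ext["es"] != "INVALID_DATA":
               continue                      # only these convert on write
           for off, length in written_ranges:
               lo = max(off, ext["file_offset"])
               hi = min(off + length,
                        ext["file_offset"] + ext["extent_length"])
               if lo < hi:                   # written portion of this extent
                   commit.append({
                       "file_offset": lo,
                       "extent_length": hi - lo,
                       "storage_offset": ext["storage_offset"]
                                         + (lo - ext["file_offset"]),
                       "es": "READ_WRITE_DATA",
                   })
       return commit
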
2.3.3. Layout Returns
struct pnfs_block_layoutreturn4 {
pnfs_block_extent4 rel_list<>; /* list of extents the client
will no longer use. */
};
The "rel_list" field is an extent list covering regions of the file The "rel_list" field is an extent list covering regions of the file
layout that are no longer needed by the client. Including extents in layout that are no longer needed by the client. Including extents in
skipping to change at page 9, line 9 skipping to change at page 13, line 12
issued before the layout was revoked are rejected at the storage.
For the block/volume protocol, this is possible by fencing a client
with an expired layout timer from the physical storage. Note,
however, that the granularity of this operation can only be at the
host/logical-unit level. Thus, if one of a client's layouts is
unilaterally revoked by the server, it will effectively render
useless *all* of the client's layouts for files located on the
storage units comprising the logical volume. This may render useless
the client's layouts for files in other filesystems.
2.3.4. Client Copy-on-Write Processing
Distinguishing the READ_WRITE_DATA and READ_DATA extent types in
combination with the allowed overlap of READ_DATA extents with
INVALID_DATA extents allows copy-on-write processing to be done by
pNFS clients. In classic NFS, this operation would be done by the
server. Since pNFS enables clients to do direct block access, it is
useful for clients to participate in copy-on-write operations. All
block/volume pNFS clients MUST support this copy-on-write processing.

When a client wishes to write data covered by a READ_DATA extent, it
skipping to change at page 14, line 9
information back to the server, for writable data, some INVALID_DATA
extents may be committed as READ_WRITE_DATA extents, signifying that
the storage at the corresponding storage_offset values has been
stored into and is now to be considered as valid data to be read.
READ_DATA extents are not committed to the server. For extents that
the client receives via LAYOUTGET as INVALID_DATA and returns via
LAYOUTCOMMIT as READ_WRITE_DATA, the server will understand that the
READ_DATA mapping for that extent is no longer valid or necessary for
that file.
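
A simplified, non-normative sketch of the client copy-on-write write
path is shown below for a single server-sized block. It assumes a
READ_DATA extent overlapped by an INVALID_DATA extent for the same
file range, block-aligned bookkeeping, and hypothetical
read_volume()/write_volume() helpers.

   # Sketch: copy-on-write of one block from READ_DATA to INVALID_DATA.

   def cow_write_block(volume, read_ext, inval_ext, block_off, blocksize,
                       new_bytes, new_offset, read_volume, write_volume):
       # 1. Read the current contents from the read-only location.
       old = read_volume(volume, read_ext["storage_offset"] + block_off,
                         blocksize)
       # 2. Merge the client's new bytes at their offset within the block.
       block = bytearray(old)
       block[new_offset:new_offset + len(new_bytes)] = new_bytes
       # 3. Write the whole block to the uninitialized location; bytes not
       #    supplied by the application keep the copied values.
       write_volume(volume, inval_ext["storage_offset"] + block_off,
                    bytes(block))
       # 4. The written portion is now READ_WRITE_DATA and must later be
       #    reported to the server via LAYOUTCOMMIT.
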
2.3.5. Extents are Permissions
Layout extents returned to pNFS clients grant permission to read or
write; READ_DATA and NONE_DATA are read-only (NONE_DATA reads as
zeroes), READ_WRITE_DATA and INVALID_DATA are read/write
(INVALID_DATA reads as zeros, any write converts it to
READ_WRITE_DATA). This is the only client means of obtaining
permission to perform direct I/O to storage devices; a pNFS client
MUST NOT perform direct I/O operations that are not permitted by an
extent held by the client. Client adherence to this rule places the
pNFS server in control of potentially conflicting storage device
skipping to change at page 15, line 23
layout delegations, I/Os will be issued from the clients that hold
the delegations directly to the storage devices that host the data.
These devices have no knowledge of files, mandatory locks, or share
reservations, and are not in a position to enforce such restrictions.
For this reason the NFSv4 server MUST NOT grant layout delegations
that conflict with mandatory locks or share reservations. Further,
if a conflicting mandatory lock request or a conflicting open request
arrives at the server, the server MUST recall the part of the layout
delegation in conflict with the request before granting the request.
2.3.6. End-of-file Processing
The end-of-file location can be changed in two ways: implicitly as
the result of a WRITE or LAYOUTCOMMIT beyond the current end-of-file,
or explicitly as the result of a SETATTR request. Typically, when a
file is truncated by an NFSv4 client via the SETATTR call, the server
frees any disk blocks belonging to the file which are beyond the new
end-of-file byte, and may write zeros to the portion of the new end-
of-file block beyond the new end-of-file byte. These actions render
any pNFS layouts which refer to the blocks that are freed or written
semantically invalid. Therefore, the server MUST recall from clients
skipping to change at page 16, line 9
processing the SETATTR request. If so, the server MUST recall any
layouts from pNFS clients which refer to the blocks before processing
the truncate. If the server does not free the INVALID_DATA blocks
while processing the SETATTR request, it need not recall layouts
which refer only to the INVALID_DATA blocks.

When a file is extended implicitly by a WRITE or LAYOUTCOMMIT beyond
the current end-of-file, or extended explicitly by a SETATTR request,
the server need not recall any portions of any pNFS layouts.
2.4. Crash Recovery Issues
When the server crashes while the client holds a writable layout, and
the client has written data to blocks covered by the layout, and the
blocks are still in the INVALID_DATA state, the client has two
options for recovery. If the data that has been written to these
blocks is still cached by the client, the client can simply re-write
the data via NFSv4, once the server has come back online. However,
if the data is no longer in the client's cache, the client MUST NOT
attempt to source the data from the data servers. Instead, it should
skipping to change at page 19, line 20
READ_WRITE_DATA in pnfs_block_layoutupdate requests. Updated section
2.2.5 to specify that data corruption can occur; that requests, not
the client, are rejected; that server "SHOULD" recall conflicting
portions of layouts. Clarified that unilateral revocation may affect
layouts from other filesystems. Changed signature offset to be a
signed quantity to allow for labels at a fixed location from the end
of a volume. Changed all data structures to have suffix "4", changed
extentState4 to pnfs_block_extent_state4 and sigComponent to
pnfs_block_sig_component4, to conform to [NFSv4.1].
03: Moved sections GETDEVICELIST and GETDEVICEINFO earlier in the
document for better readability. Added the
pnfs_block_simple_volume_info4 data structure, and added volume id
fields to all pnfs_block volume info data structures.
7. Acknowledgments
This draft draws extensively on the authors' familiarity with the
mapping functionality and protocol in EMC's HighRoad system
[HighRoad]. The protocol used by HighRoad is called FMP (File
Mapping Protocol); it is an add-on protocol that runs in parallel
with filesystem protocols such as NFSv3 to provide pNFS-like
functionality for block/volume storage. While drawing on HighRoad
FMP, the data structures and functional considerations in this draft
differ in significant ways, based on lessons learned and the