NFSv4 Working Group                                             D. Black
Internet Draft                                               S. Fridella
Expires: June 12, 2009                                        J. Glasgow
Intended Status: Proposed Standard                       EMC Corporation
                                                        December 9, 2008

                        pNFS Block/Volume Layout
                  draft-ietf-nfsv4-pnfs-block-11.txt
Status of this Memo

   By submitting this Internet-Draft, each author represents that any
   applicable patent or other IPR claims of which he or she is aware
   have been or will be disclosed, and any of which he or she becomes
   aware will be disclosed, in accordance with Section 6 of BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
skipping to change at page 2, line 13
   based storage.

Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC-2119 [RFC2119].
Table of Contents

   1. Introduction..................................................3
      1.1. General Definitions......................................3
1.2. XDR Description of NFSv4.1 block layout...................4 1.2. XDR Description of NFSv4.1 block layout.................. 4
   2. Block Layout Description......................................5
      2.1. Background and Architecture..............................5
      2.2. GETDEVICELIST and GETDEVICEINFO..........................7
         2.2.1. Volume Identification...............................7
         2.2.2. Volume Topology.....................................8
         2.2.3. GETDEVICELIST and GETDEVICEINFO deviceid4..........11
      2.3. Data Structures: Extents and Extent Lists...............11
         2.3.1. Layout Requests and Extent Lists...................14
         2.3.2. Layout Commits.....................................15
         2.3.3. Layout Returns.....................................16
         2.3.4. Client Copy-on-Write Processing....................16
         2.3.5. Extents are Permissions............................18
         2.3.6. End-of-file Processing.............................19
         2.3.7. Layout Hints.......................................20
         2.3.8. Client Fencing.....................................20
      2.4. Crash Recovery Issues...................................22
      2.5. Recalling resources: CB_RECALL_ANY......................23
      2.6. Transient and Permanent Errors..........................23
   3. Security Considerations......................................24
   4. Conclusions..................................................26
   5. IANA Considerations..........................................26
   6. Acknowledgments..............................................26
   7. References...................................................26
      7.1. Normative References....................................26
      7.2. Informative References..................................27
   Author's Addresses..............................................27
   Intellectual Property Statement.................................27
   Disclaimer of Validity..........................................28
   Copyright Statement.............................................28
   Acknowledgment..................................................28
1. Introduction

   Figure 1 shows the overall architecture of a Parallel NFS (pNFS)
   system:
      +-----------+
      |+-----------+                                +-----------+
      ||+-----------+                               |           |
      |||           |        NFSv4.1 + pNFS         |           |
      +||  Clients  |<----------------------------->|  Server   |
       +|           |                               |           |
        +-----------+                               |           |
             |||                                    +-----------+
             |||                                          |
             |||                                          |
             |||  Storage        +-----------+            |
             |||  Protocol       |+-----------+           |
             ||+-----------------||+-----------+ Control  |
             |+------------------|||           | Protocol |
             +-------------------+||  Storage  |----------+
                                  +|  Systems  |
                                   +-----------+

                     Figure 1 pNFS Architecture
   The overall approach is that pNFS-enhanced clients obtain sufficient
   information from the server to enable them to access the underlying
   storage (on the Storage Systems) directly.  See the pNFS portion of
   [NFSV4.1] for more details.  This draft is concerned with access
   from pNFS clients to Storage Systems over storage protocols based on
   blocks and volumes, such as the SCSI protocol family (e.g., parallel
   SCSI, FCP for Fibre Channel, iSCSI, SAS, and FCoE).  This class of
   storage is referred to as block/volume storage.  While the Server to
   Storage System protocol, called the "Control Protocol", is not of
   concern for interoperability here, it will typically also be a
   block/volume protocol when clients use block/volume protocols.
1.1. General Definitions

   The following definitions are provided for the purpose of providing
   an appropriate context for the reader.

   Byte

     This document defines a byte as an octet, i.e. a datum exactly 8
     bits in length.

   Client

     The "client" is the entity that accesses the NFS server's
     resources.  The client may be an application which contains the
     logic to access the NFS server directly.  The client may also be
     the traditional operating system client that provides remote file
     system services for a set of applications.
   Server

     The "Server" is the entity responsible for coordinating client
     access to a set of file systems and is identified by a Server
     owner.
1.2. Code Components Licensing Notice
The external data representation (XDR) description and scripts for
extracting the XDR description are Code Components as described in
Section 4 of "Legal Provisions Relating to IETF Documents" [LEGAL].
These Code Components are licensed according to the terms of Section
4 of "Legal Provisions Relating to IETF Documents".
1.3. XDR Description
   This document contains the XDR ([XDR]) description of the NFSv4.1
   block layout protocol.  The XDR description is embedded in this
   document in a way that makes it simple for the reader to extract
   into a ready to compile form.  The reader can feed this document
   into the following shell script to produce the machine readable XDR
   description of the NFSv4.1 block layout:
   #!/bin/sh
   grep '^ *///' $* | sed 's?^ */// ??' | sed 's?^ *///$??'
   I.e. if the above script is stored in a file called "extract.sh",
   and this document is in a file called "spec.txt", then the reader
   can do:

   sh extract.sh < spec.txt > nfs4_block_layout_spec.x
   The effect of the script is to remove both leading white space and a
   sentinel sequence of "///" from each matching line.

   The embedded XDR file header follows, with subsequent pieces
   embedded throughout the document:
   ////*
   /// * This code was derived from IETF RFC &rfc.number.
   [[RFC Editor: please insert RFC number if needed]]
   /// * Please reproduce this note if possible.
   /// */
   ///
   ////*
   /// * nfs4_block_layout_prot.x
   /// */
   ///
   ///%#include "nfsv41.h"
   ///
The XDR code contained in this document depends on types from The XDR code contained in this document depends on types from
skipping to change at page 6, line 24
   holes (read as zero, can be written by client).  This draft also
   supports client participation in copy on write (e.g. for file
   systems with snapshots) by providing both read-only and un-
   initialized storage for the same range in a layout.  Reads are
   initially performed on the read-only storage, with writes going to
   the un-initialized storage.  After the first write that initializes
   the un-initialized storage, all reads are performed to that now-
   initialized writeable storage, and the corresponding read-only
   storage is no longer used.
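
   The read routing that this implies on the client side can be
   sketched as follows.  This is a non-normative C sketch, not part of
   the protocol; the extent lookup helpers and the record of which
   blocks have been written are hypothetical stand-ins for whatever
   per-layout bookkeeping a client implementation keeps:

   #include <stdint.h>
   #include <stddef.h>

   /* Hypothetical client-local state; illustrative only. */
   struct layout;                       /* client-cached layout      */
   struct extent;                       /* one extent of that layout */

   /* Hypothetical lookup helpers: locate the read-only and the
    * un-initialized (writeable) extents covering a file offset. */
   struct extent *find_ro_extent(struct layout *lo, uint64_t off);
   struct extent *find_inv_extent(struct layout *lo, uint64_t off);

   /* Hypothetical: has this client already written (and therefore
    * initialized) the block containing 'off'? */
   int block_was_written(struct layout *lo, uint64_t off);

   /* Route a read to the correct storage for a copy-on-write range. */
   struct extent *cow_read_source(struct layout *lo, uint64_t off)
   {
       struct extent *inv = find_inv_extent(lo, off);

       /* After the first write initializes the un-initialized
        * storage, all reads of that block go to the now-initialized
        * writeable storage; the read-only storage is no longer used
        * for it. */
       if (inv != NULL && block_was_written(lo, off))
           return inv;

       /* Otherwise reads are performed on the read-only storage. */
       return find_ro_extent(lo, off);
   }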
The block/volume layout solution expands the security
responsibilities of the pNFS clients and there are a number of
environments where the mandatory to implement security properties for
NFS cannot be satisfied. The additional security responsibilities of
the client follow, and a full discussion is present in Section 3
"Security Considerations".
   o Typically, storage area network (SAN) disk arrays and SAN
     protocols provide access control mechanisms (e.g., logical unit
     number mapping and/or masking) which operate at the granularity of
     individual hosts, not individual blocks.  For this reason, block-
     based protection must be provided by the client software.
   o Similarly, SAN disk arrays and SAN protocols typically are not
     able to validate NFS locks that apply to file regions.  For
     instance, if a file is covered by a mandatory read-only lock, the
     server can ensure that only readable layouts for the file are
     granted to pNFS clients.  However, it is up to each pNFS client to
     ensure that the readable layout is used only to service read
     requests, and not to allow writes to the existing parts of the
     file.
Since block/volume storage systems are generally not capable of
enforcing such file-based security, in environments where pNFS
clients cannot be trusted to enforce such policies, pNFS block/volume
storage layouts SHOULD NOT be used.
2.2. GETDEVICELIST and GETDEVICEINFO

2.2.1. Volume Identification
   Storage Systems such as storage arrays can have multiple physical
   network ports that need not be connected to a common network,
   resulting in a pNFS client having simultaneous multipath access to
   the same storage volumes via different ports on different networks.
   The networks may not even be the same technology - for example,
   access to the same volume via both iSCSI and Fibre Channel is
   possible, hence network addresses are difficult to use for volume
   identification.  For this reason, this pNFS block layout identifies
   storage volumes by content, for example providing the means to match
   (unique portions of) labels used by volume managers.  Volume
   identification is performed by matching one or more opaque byte
   sequences to specific parts of the stored data.  Any block pNFS
   system using this layout MUST support a means of content-based
   unique volume identification that can be employed via the data
   structure given here.
   ///struct pnfs_block_sig_component4 { /* disk signature component */
   ///    int64_t bsc_sig_offset;        /* byte offset of component
   ///                                      on volume*/
   ///    opaque  bsc_contents<>;        /* contents of this component
   ///                                      of the signature */
   ///};
skipping to change at page 8, line 12
   ensure that the device label is always present at the offset from
   the end of the volume as seen by the clients.

   A signature is an array of up to "PNFS_BLOCK_MAX_SIG_COMP" (defined
   below) signature components.  The client MUST NOT assume that all
   signature components are colocated within a single sector on a block
   device.

   The pNFS client block layout driver uses this volume identification
   to map pnfs_block_volume_type4 PNFS_BLOCK_VOLUME_SIMPLE deviceid4s
   to its local view of a logical unit number (LUN).
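
   A client-side signature check can be sketched as follows.  This is
   a non-normative C sketch, not part of the protocol; the device read
   helper is hypothetical, and a negative component offset is treated
   as relative to the end of the volume, as described above:

   #include <stdint.h>
   #include <string.h>

   /* Hypothetical C mirror of pnfs_block_sig_component4. */
   struct sig_component {
       int64_t        offset;    /* byte offset; negative = from end */
       uint32_t       len;       /* length of expected contents      */
       const uint8_t *contents;  /* expected bytes                   */
   };

   /* Hypothetical helper: read 'len' bytes at absolute byte offset
    * 'off' of the candidate device; returns 0 on success. */
   int dev_read(int fd, uint64_t off, void *buf, uint32_t len);

   /* Return 1 if every component of the signature matches the content
    * of the candidate device, else 0.  Components need not be
    * colocated within a single sector. */
   int volume_matches(int fd, uint64_t dev_size,
                      const struct sig_component *sig, int ncomp)
   {
       uint8_t buf[4096];

       for (int i = 0; i < ncomp; i++) {
           /* A negative offset is relative to the end of the volume. */
           uint64_t off = (sig[i].offset < 0)
               ? dev_size - (uint64_t)(-sig[i].offset)
               : (uint64_t)sig[i].offset;

           if (sig[i].len > sizeof(buf) ||
               dev_read(fd, off, buf, sig[i].len) != 0 ||
               memcmp(buf, sig[i].contents, sig[i].len) != 0)
               return 0;   /* this component does not match */
       }
       return 1;           /* all components matched */
   }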
2.2.2. Volume Topology

   The pNFS block server volume topology is expressed as an arbitrary
   combination of base volume types enumerated in the following data
   structures.  The individual components of the topology are contained
   in an array and components may refer to other components by using
   array indices.
   ///enum pnfs_block_volume_type4 {
skipping to change at page 12, line 33
   ///
   ///struct pnfs_block_extent4 {
   ///    deviceid4    bex_vol_id;         /* id of logical volume on
   ///                                        which extent of file is
   ///                                        stored. */
   ///    offset4      bex_file_offset;    /* the starting byte offset
   ///                                        in the file */
   ///    length4      bex_length;         /* the size in bytes of the
   ///                                        extent */
   ///    offset4      bex_storage_offset; /* the starting byte offset
   ///                                        in the volume */
   ///    pnfs_block_extent_state4 bex_state;
   ///                                     /* the state of this
   ///                                        extent */
   ///};
   ///
   ////* block layout specific type for loc_body */
   ///struct pnfs_block_layout4 {
   ///    pnfs_block_extent4 blo_extents<>;
   ///                       /* extents which make up this layout. */
   ///};
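
   The address arithmetic implied by pnfs_block_extent4 is simple: a
   file byte covered by an extent lives at a fixed displacement from
   bex_storage_offset on the volume identified by bex_vol_id.  A non-
   normative C sketch, using a hypothetical stand-in for the XDR
   structure above:

   #include <stdint.h>

   /* Hypothetical C mirror of pnfs_block_extent4 (volume id and
    * extent state omitted for brevity). */
   struct block_extent {
       uint64_t file_offset;     /* bex_file_offset    */
       uint64_t length;          /* bex_length         */
       uint64_t storage_offset;  /* bex_storage_offset */
   };

   /* Translate a file byte offset to the corresponding byte offset
    * on the volume.  Returns 0 and sets *vol_off if this extent
    * covers the file offset, else returns -1. */
   int extent_map(const struct block_extent *ext, uint64_t file_off,
                  uint64_t *vol_off)
   {
       if (file_off < ext->file_offset ||
           file_off >= ext->file_offset + ext->length)
           return -1;    /* not covered by this extent */

       *vol_off = ext->storage_offset
                + (file_off - ext->file_offset);
       return 0;
   }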
skipping to change at page 20, line 45
   to unilaterally revoke extents from one client in order to transfer
   the extents to another client.  The pNFS server implementation MUST
   ensure that when resources are transferred to another client, they
   are not used by the client originally owning them, and this must be
   ensured against any possible combination of partitions and delays
   among all of the participants to the protocol (server, storage and
   client).  Two approaches to guaranteeing this isolation are possible
   and are discussed below.
   One implementation choice for fencing the block client from the
   block storage is the use of LUN masking or mapping at the storage
   systems or storage area network to disable access by the client to
   be isolated.  This requires server access to a management interface
   for the storage system and authorization to perform LUN masking and
   management operations.  For example, SMI-S [SMIS] provides a means
   to discover and mask LUNs, including a means of associating clients
   with the necessary World Wide Names or Initiator names to be masked.
   In the absence of support for LUN masking, the server has to rely on
   the clients to implement a timed lease I/O fencing mechanism.
   Because clients do not know if the server is using LUN masking, in
   all cases the client MUST implement timed lease fencing.  In timed
   lease fencing we define two time periods, the first, "lease_time" is
   the length of a lease as defined by the server's lease_time
   attribute (see [NFSV4.1]), and the second, "blh_maximum_io_time" is
   the maximum time it can take for a client I/O to the storage system
   to either complete or fail; this value is often 30 seconds or 60
   seconds, but
skipping to change at page 21, line 48
   that client.  Thus the server, by returning either NFS4ERR_INVAL or
   NFS4_OK determines whether or not a client with a large, or an
   unbounded maximum I/O time may use pNFS.

   Using the lease time and the maximum I/O time values, we specify the
   behavior of the client and server as follows.
   When a client receives layout information via a LAYOUTGET operation,
   those layouts are valid for at most "lease_time" seconds from when
   the server granted them.  A layout is renewed by any successful
   SEQUENCE operation, or whenever a new stateid is created or updated
   (see the section "Lease Renewal" of [NFSV4.1]).  If the layout lease
   is not renewed prior to expiration, the client MUST cease to use the
   layout after "lease_time" seconds from when it either sent the
   original LAYOUTGET command, or sent the last operation renewing the
   lease.  In other words, the client may not issue any I/O to blocks
   specified by an expired layout.  In the presence of large
   communication delays between the client and server it is even
   possible for the lease to expire prior to the server response
   arriving at the client.  In such a situation the client MUST NOT use
   the expired layouts, and SHOULD revert to using standard NFSv41 READ
skipping to change at page 22, line 36
   In the absence of known two way communication between the client and
   the server on the fore channel, the server must wait for at least
   the time period "lease_time" plus "blh_maximum_io_time" before
   transferring layouts from the original client to any other client.
   The server, like the client, must take a conservative approach, and
   start the lease expiration timer from the time that it received the
   operation which last renewed the lease.
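
   The timing rules above reduce to two deadline computations, shown
   here as a non-normative C sketch.  The timestamp bookkeeping is
   hypothetical; the essential asymmetry is that the client times the
   lease from when it sent the last renewing operation, while the
   server times it from when it received that operation and then also
   waits out the maximum I/O time:

   #include <stdint.h>
   #include <stdbool.h>

   /* Hypothetical clock, in seconds. */
   uint64_t now(void);

   /* Client side: a layout may be used only while the lease, timed
    * conservatively from when the last renewing operation was sent,
    * has not expired. */
   bool client_may_use_layout(uint64_t last_renewal_sent,
                              uint64_t lease_time)
   {
       return now() < last_renewal_sent + lease_time;
   }

   /* Server side: before transferring a fenced client's extents to
    * another client, wait out the lease (timed from when the last
    * renewing operation was received) plus the maximum time an
    * in-flight client I/O can take to complete or fail. */
   bool server_may_transfer_layout(uint64_t last_renewal_received,
                                   uint64_t lease_time,
                                   uint64_t blh_maximum_io_time)
   {
       return now() >= last_renewal_received + lease_time
                     + blh_maximum_io_time;
   }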
2.4. Crash Recovery Issues
A critical requirement in crash recovery is that both the client and
the server know when the other has failed. Additionally, it is
required that a client sees a consistent view of data across server
restarts. These requirements and a full discussion of crash recovery
issues are covered in the "Crash Recovery" section of the NFSv41
specification [NFSV4.1]. This document contains additional crash
recovery material specific only to the block/volume layout.
   When the server crashes while the client holds a writable layout,
   and the client has written data to blocks covered by the layout, and
   the blocks are still in the PNFS_BLOCK_INVALID_DATA state, the
   client has two options for recovery.  If the data that has been
   written to these blocks is still cached by the client, the client
   can simply re-write the data via NFSv4, once the server has come
   back online.  However, if the data is no longer in the client's
   cache, the client MUST NOT attempt to source the data from the data
   servers.  Instead, it should attempt to commit the blocks in
   question to the server during the server's recovery grace period, by
   sending a LAYOUTCOMMIT with the
skipping to change at page 25, line 13
   contrast, an environment where client-side protection may suffice
   consists of co-located clients, server and storage systems in a
   datacenter with a physically isolated SAN under control of a single
   system administrator or small group of system administrators.
   This also has implications for some NFSv4 functionality outside
   pNFS.  For instance, if a file is covered by a mandatory read-only
   lock, the server can ensure that only readable layouts for the file
   are granted to pNFS clients.  However, it is up to each pNFS client
   to ensure that the readable layout is used only to service read
   requests, and not to allow writes to the existing parts of the file.
   Similarly, block/volume storage devices are unable to validate NFS
   Access Control Lists (ACLs) and file open modes, so the client must
   enforce the policies before sending a read or write request to the
   storage device.  Since block/volume storage systems are generally
   not capable of enforcing such file-based security, in environments
   where pNFS clients cannot be trusted to enforce such policies, pNFS
   block/volume storage layouts SHOULD NOT be used.
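
   A non-normative sketch of the client-side check this implies, with
   hypothetical types standing in for the client's view of its open
   state and layout (ACL and mandatory-lock checks are elided):

   #include <stdbool.h>

   /* Hypothetical client-local state; illustrative only. */
   enum iomode { IOMODE_READ, IOMODE_RW };

   struct open_state {
       bool open_for_read;
       bool open_for_write;
   };

   /* Enforce file open modes before issuing direct block I/O: a
    * readable layout may only service reads, and a write requires
    * both a writeable layout and write access to the file. */
   bool may_issue_block_io(const struct open_state *os,
                           enum iomode layout_mode, bool is_write)
   {
       if (is_write)
           return layout_mode == IOMODE_RW && os->open_for_write;
       return os->open_for_read;
   }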
   Access to block/volume storage is logically at a lower layer of the
   I/O stack than NFSv4, and hence NFSv4 security is not directly
   applicable to protocols that access such storage directly.
   Depending on the protocol, some of the security mechanisms provided
   by NFSv4 (e.g., encryption, cryptographic integrity) may not be
   available, or may be provided via different means.  At one extreme,
   pNFS with block/volume storage can be used with storage access
   protocols (e.g., parallel SCSI) that provide essentially no security
   functionality.  At the other extreme, pNFS may be used with storage
   protocols such as
skipping to change at page 26, line 37
   on-write is based on text and ideas contributed by Craig Everhart.

   Andy Adamson, Ben Campbell, Richard Chandler, Benny Halevy, Fredric
   Isaman, and Mario Wurzl all helped to review drafts of this
   specification.

7. References

7.1. Normative References
[LEGAL] IETF Trust, "Legal Provisions Relating to IETF Documents",
URL http://trustee.ietf.org/docs/IETF-Trust-License-
Policy.pdf, November 2008.
   [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
             Requirement Levels", BCP 14, RFC 2119, March 1997.

   [NFSV4.1] Shepler, S., Eisler, M., and Noveck, D. ed., "NFSv4 Minor
             Version 1", draft-ietf-nfsv4-minorversion1-26.txt,
             Internet Draft, September 2008.

   [XDR]     Eisler, M., "XDR: External Data Representation Standard",
             STD 67, RFC 4506, May 2006.