NFSv4 Working Group                                      David L. Black 
Internet Draft                                         Stephen Fridella 
Expires: April 2008                                       Jason Glasgow 
Intended Status: Proposed Standard                      EMC Corporation 
                                                        October 3, 2007 
                                    
 
                                      
                         pNFS Block/Volume Layout 
                    draft-ietf-nfsv4-pnfs-block-04.txt 


Status of this Memo 

   By submitting this Internet-Draft, each author represents that       
   any applicable patent or other IPR claims of which he or she is       
   aware have been or will be disclosed, and any of which he or she       
   becomes aware will be disclosed, in accordance with Section 6 of       
   BCP 79. 

   Internet-Drafts are working documents of the Internet Engineering 
   Task Force (IETF), its areas, and its working groups.  Note that 
   other groups may also distribute working documents as Internet-
   Drafts. 

   Internet-Drafts are draft documents valid for a maximum of six months 
   and may be updated, replaced, or obsoleted by other documents at any 
   time.  It is inappropriate to use Internet-Drafts as reference 
   material or to cite them other than as "work in progress." 

   The list of current Internet-Drafts can be accessed at 
        http://www.ietf.org/ietf/1id-abstracts.txt 

   The list of Internet-Draft Shadow Directories can be accessed at 
        http://www.ietf.org/shadow.html 

   This Internet-Draft will expire in April 2008. 

Abstract 

   Parallel NFS (pNFS) extends NFSv4 to allow clients to directly access 
   file data on the storage used by the NFSv4 server.  This ability to 
   bypass the server for data access can increase both performance and 
   parallelism, but requires additional client functionality for data 
   access, some of which is dependent on the class of storage used.  The 
   main pNFS operations draft specifies storage-class-independent 
   extensions to NFS; this draft specifies the additional extensions 
   (primarily data structures) for use of pNFS with block and volume 
   based storage. 

Conventions used in this document 

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 
   document are to be interpreted as described in RFC-2119 [RFC2119]. 

Table of Contents 

   1. Introduction 
   2. Block Layout Description 
      2.1. Background and Architecture 
      2.2. GETDEVICELIST and GETDEVICEINFO 
         2.2.1. Volume Identification 
         2.2.2. Volume Topology 
         2.2.3. GETDEVICELIST and GETDEVICEINFO deviceid4 
      2.3. Data Structures: Extents and Extent Lists 
         2.3.1. Layout Requests and Extent Lists 
         2.3.2. Layout Commits 
         2.3.3. Layout Returns 
         2.3.4. Client Copy-on-Write Processing 
         2.3.5. Extents are Permissions 
         2.3.6. End-of-file Processing 
         2.3.7. Client Fencing 
      2.4. Crash Recovery Issues 
      2.5. Recalling resources: CB_RECALL_ANY 
      2.6. Transient and Permanent Errors 
   3. Security Considerations 
   4. Conclusions 
   5. IANA Considerations 
   6. Revision History 
   7. Acknowledgments 
   8. References 
      8.1. Normative References 
      8.2. Informative References 
   Author's Addresses 
   Intellectual Property Statement 
   Disclaimer of Validity 
   Copyright Statement 
   Acknowledgment 

1. Introduction 

   Figure 1 shows the overall architecture of a pNFS system: 

       +-----------+                                
       |+-----------+                                 +-----------+ 
       ||+-----------+                                |           | 
       |||           |        NFSv4 + pNFS            |           | 
       +||  Clients  |<------------------------------>|   Server  | 
        +|           |                                |           | 
         +-----------+                                |           | 
              |||                                     +-----------+ 
              |||                                           | 
              |||                                           | 
              |||                +-----------+              | 
              |||                |+-----------+             | 
              ||+----------------||+-----------+            | 
              |+-----------------|||           |            | 
              +------------------+||  Storage  |------------+ 
                                  +|  Systems  | 
                                   +-----------+ 
    
                        Figure 1 pNFS Architecture 

   The overall approach is that pNFS-enhanced clients obtain sufficient 
   information from the server to enable them to access the underlying 
   storage (on the Storage Systems) directly.  See the pNFS portion of 
   [NFSV4.1] for more details.  This draft is concerned with access from 
   pNFS clients to Storage Systems over storage protocols based on 
   blocks and volumes, such as the SCSI protocol family (e.g., parallel 
   SCSI, FCP for Fibre Channel, iSCSI, SAS).  This class of storage is 
   referred to as block/volume storage.  While the Server to Storage 
   System protocol is not of concern for interoperability here, it will 
   typically also be a block/volume protocol when clients use block/ 
   volume protocols. 

2. Block Layout Description 

2.1. Background and Architecture 

   The fundamental storage abstraction supported by block/volume storage 
   is a storage volume consisting of a sequential series of fixed size 
   blocks.  This can be thought of as a logical disk; it may be realized 
   by the Storage System as a physical disk, a portion of a physical 
   disk or something more complex (e.g., concatenation, striping, RAID, 
   and combinations thereof) involving multiple physical disks or 
   portions thereof. 

   A pNFS layout for this block/volume class of storage is responsible 
   for mapping from an NFS file (or portion of a file) to the blocks of 
   storage volumes that contain the file.  The blocks are expressed as 
   extents with 64 bit offsets and lengths using the existing NFSv4 
   offset4 and length4 types.  Clients must be able to perform I/O to 
   the block extents without affecting additional areas of storage 
   (especially important for writes); therefore, extents MUST be 
   aligned to 512-octet boundaries, and SHOULD be aligned to the block 
   size used by the NFSv4 server in managing the actual filesystem (4 
   kilobytes and 8 kilobytes are common block sizes).  This block size 
   is available as the NFSv4.1 layout_blksize attribute [NFSV4.1]. 

   The pNFS operation for requesting a layout (LAYOUTGET) includes the 
   "layoutiomode4 loga_iomode" argument which indicates whether the 
   requested layout is for read-only use or read-write use.  A read-only 
   layout may contain holes that are read as zero, whereas a read-write 
   layout will contain allocated, but un-initialized storage in those 
   holes (read as zero, can be written by the client).  This draft also 
   supports client participation in copy-on-write by providing both 
   read-only and un-initialized storage for the same range in a layout.  
   Reads are initially performed on the read-only storage, with writes 
   going to the un-initialized storage.  After the first write that 
   initializes the un-initialized storage, all reads are performed to 
   that now-initialized writeable storage, and the corresponding read-
   only storage is no longer used. 

2.2. GETDEVICELIST and GETDEVICEINFO 

2.2.1. Volume Identification 

   Storage Systems such as storage arrays can have multiple physical 
   network ports that need not be connected to a common network, 
   resulting in a pNFS client having simultaneous multipath access to 
   the same storage volumes via different ports on different networks.  
   The networks may not even be the same technology - for example, 
   access to the same volume via both iSCSI and Fibre Channel is 
   possible, hence network addresses are difficult to use for volume 
   identification.  For this reason, this pNFS block layout identifies 
   storage volumes by content, for example providing the means to match 
   (unique portions of) labels used by volume managers.  Any block pNFS 
   system using this layout MUST support a means of content-based unique 
   volume identification that can be employed via the data structure 
   given here. 

   struct pnfs_block_sig_component4 {  /*  disk signature component */ 

      offset4 sig_offset;        /* octet offset of component 
                                    within signature block */ 

      opaque  contents<>;        /* contents of this component of the 
                                    signature (this is opaque) */ 

   }; 

   Note that the opaque "contents" field in the 
   "pnfs_block_sig_component4" structure MUST NOT be interpreted as a 
   zero-terminated string, as it may contain embedded zero-valued 
   octets.  There are no restrictions on alignment (e.g., neither 
   sig_offset nor the length is required to be a multiple of 4).  The 
   sig_offset represents an offset from the start of a signature block 
   (defined below). 

   The pNFS client block layout driver uses this volume identification 
   to map pnfs_block_volume_type4 VOLUME_SIMPLE deviceid4s to its local 
   view of a LUN. 
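
   As a non-normative illustration, the following C sketch shows how a 
   client-side layout driver might test one signature component 
   against a signature block that has already been read from a 
   candidate LUN.  The type and parameter names are simplified stand-
   ins for the XDR definitions above, not part of the protocol. 

      #include <stdint.h> 
      #include <string.h> 

      /* Simplified in-memory form of pnfs_block_sig_component4. */ 
      struct sig_component { 
          uint64_t       sig_offset;  /* octet offset within the 
                                         signature block */ 
          size_t         len;         /* length of contents */ 
          const uint8_t *contents;    /* opaque octets; NOT a C string */ 
      }; 

      /* 
       * Compare one signature component against a signature block 
       * read from a candidate LUN.  Returns 1 on match, 0 otherwise. 
       * memcmp() is used because the contents may contain embedded 
       * zero octets, so string functions must not be applied. 
       */ 
      static int 
      sig_component_matches(const struct sig_component *sc, 
                            const uint8_t *sig_block, size_t blk_len) 
      { 
          if (sc->sig_offset > blk_len || 
              sc->len > blk_len - sc->sig_offset) 
              return 0;        /* component lies outside the block */ 
          return memcmp(sig_block + sc->sig_offset, 
                        sc->contents, sc->len) == 0; 
      } 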

2.2.2. Volume Topology 

   The pNFS block server volume topology is expressed as an arbitrary 
   combination of base volume types enumerated in the following data 
   structures. 

   enum pnfs_block_volume_type4 { 

      VOLUME_SIMPLE = 0,      /* volume maps to a single LU */ 

      VOLUME_SLICE  = 1,      /* volume is a slice of another volume */ 

      VOLUME_CONCAT = 2,      /* volume is a concatenation of multiple 
                                 volumes */ 

      VOLUME_STRIPE = 3       /* volume is striped across multiple 
                                 volumes */ 

   }; 

   struct pnfs_block_simple_volume_info4 { 

      deviceid4         vol_id;        /* this volume id */ 

      int64_t           sig_offset;    /* offset in 512-octet blocks, 
                                          from start of volume if 
                                          positive, from end of volume 
                                          if negative */ 

      pnfs_block_sig_component4  ds<MAX_SIG_COMP>; 
                                       /* disk signature */ 

   }; 

    

   struct pnfs_block_slice_volume_info4 { 

      deviceid4        vol_id;  /* this volume id */ 

      offset4          start;   /* offset of the start of the 
                                    slice in 512 octet blocks */ 

      length4          length;  /* length of slice in 512 octet blocks 
                                    */ 

      deviceid4        volume;  /* volume which is sliced */ 

   }; 

    

   struct pnfs_block_concat_volume_info4 { 

      deviceid4         vol_id;     /* this volume id */ 

      deviceid4         volumes<>;  /* volumes which are concatenated */ 

   }; 

    

   struct pnfs_block_stripe_volume_info4 { 

      deviceid4         vol_id;        /* this volume id */ 

      length4           stripe_unit;   /* size of stripe unit in 
                                          512-octet blocks */ 

      deviceid4         volumes<>;     /* volumes which are striped 
                                          across -- MUST be same size */ 

   }; 

    

   union pnfs_block_volume4 switch (pnfs_block_volume_type4 type) { 

         case VOLUME_SIMPLE: 

               pnfs_block_simple_volume_info4 simple_info; 

         case VOLUME_SLICE: 

               pnfs_block_slice_volume_info4 slice_info; 

         case VOLUME_CONCAT: 

               pnfs_block_concat_volume_info4 concat_info; 

         case VOLUME_STRIPE: 

               pnfs_block_stripe_volume_info4 stripe_info; 

   }; 

    

   struct pnfs_block_deviceaddr4 { 

      pnfs_block_volume4   volumes<>;  /* array of volumes */ 

   }; 

    

   The "pnfs_block_deviceaddr4" data structure is a structure that 
   allows arbitrarily complex nested volume structures to be encoded.  
   The types of aggregations that are allowed are stripes, 
   concatenations, and slices. Note that the volume topology expressed 
   in the pnfs_block_deviceaddr4 data structure will always resolve to a 
   set of pnfs_block_volume_type4 VOLUME_SIMPLE.  The array of volumes 
   is ordered such that the root volume is the last element of the 
   array.  Concat, slice and stripe volumes MUST refer to volumes 
   defined by lower indexed elements of the array. 

   The "pnfs_block_device_addr4" data structure is returned by the 
   server as the storage-protocol-specific opaque field da_addr_body in 
   the "device_addr4" structure by successful GETDEVICELIST and 
 
 
Black                    Expires August 2007                   [Page 7] 
    






Internet-Draft         pNFS Block/Volume Layout              March 2007 
    

   GETDEVICEINFO operations. [NFSV4.1].  Typically the server in 
   response to a GETDEVICELIST request will return a single 
   "devlist_item4" in the gdlr_devinfo_list array.  This is because the 
   "opaque da_addr_body" field inside the "device_addr4" encodes the 
   entire volume hierarchy.  In the case of copy-on-write file systems, 
   the "gdlr_devinfo_list" array will contain two devices_item4s, one 
   describing the read-only volume hierarchy, and one describing the 
   writable volume hierarchy. 

   As noted above, all device_addr4 structures eventually resolve to a 
   set of volumes of type "pnfs_block_volume_type4 VOLUME_SIMPLE".  
   These 
   volumes are each uniquely identified by a set of signature components 
   located within respective signature blocks.  Each VOLUME_SIMPLE 
   volume specifies the location of its signature block in terms of 512 
   octet blocks.  The "int64_t sig_offset" is a signed quantity which 
   when positive represents an offset from the start of the volume, and 
   when negative represents an offset from the end of the volume. 

   Negative offsets are permitted in order to simplify the client 
   implementation on systems where the device label is found at a fixed 
   offset from the end of the volume. If the server uses negative 
   offsets to describe the signature, then the client and server MUST 
   NOT see different volume sizes.  Negative offsets SHOULD NOT be used 
   in systems that dynamically resize volumes unless care is taken to 
   ensure that the device label is always present at the offset from the 
   end of the volume as seen by the clients. 
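
   A minimal C sketch of this computation, assuming the volume size in 
   512-octet blocks is known to the client (e.g., from a SCSI READ 
   CAPACITY result); the function name is illustrative: 

      #include <stdint.h> 

      /* Octet address of the signature block: a non-negative 
       * sig_offset counts 512-octet blocks from the start of the 
       * volume, a negative one counts back from the end.  With 
       * negative offsets, vol_size_blocks MUST be the same at the 
       * client and the server. */ 
      static uint64_t 
      sig_block_address(int64_t sig_offset, uint64_t vol_size_blocks) 
      { 
          uint64_t blk = (sig_offset >= 0) 
              ? (uint64_t)sig_offset 
              : vol_size_blocks - (uint64_t)(-sig_offset); 
          return blk * 512; 
      } 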

2.2.3. GETDEVICELIST and GETDEVICEINFO deviceid4 

   The "deviceid4 dli_id" returned in the devlist_item4 of a successful 
   GETDEVICELIST operation is a shorthand id used to reference the whole 
   volume topology.  Decoding the "pnfs_block_deviceaddr4" results in a 
   flat ordering of 512-octet data blocks mapped to VOLUME_SIMPLE 
   deviceid4s.  Combined with the deviceid4 mapping to a client LUN 
   described in section 2.2.1 (Volume Identification), a logical volume 
   offset can be mapped to a 512-octet block on a pNFS client LUN 
   [NFSV4.1].  With 
   the exception of the root volume id, the device ids returned in the 
   volumes array of a pnfs_block_deviceaddr4 data structure should not 
   be passed as arguments in a GETDEVICEINFO request.  These non-root 
   volume device ids are never returned by LAYOUTGET in the 
   "pnfs_block_layout4 vol_id" field.  If a non-root device id is passed 
   as an argument in a GETDEVICEINFO request, the server SHOULD return 
   NFS4ERR_INVAL. 

2.3. Data Structures: Extents and Extent Lists 

   A pNFS block layout is a list of extents within a flat array of 512-
   octet data blocks in a logical volume.  The details of the volume 
   topology can be determined by using the GETDEVICEINFO or 
   GETDEVICELIST operation (see discussion of volume identification, 
   section 2.2 above).  The block layout describes the individual block 
   extents on the volume that make up the file.  The offsets and 
   lengths contained in an extent are specified in units of octets. 

   enum pnfs_block_extent_state4 { 

     READ_WRITE_DATA  = 0, /* the data located by this extent is valid 
                              for reading and writing. */ 

     READ_DATA = 1,        /* the data located by this extent is valid 
                              for reading only; it may not be written. 
                              */ 

     INVALID_DATA = 2,     /* the location is valid; the data is 
                              invalid. It is a newly (pre-) allocated 
                              extent. There is physical space on the 
                              volume. */ 

     NONE_DATA = 3         /* the location is invalid. It is a hole in 
                              the file. There is no physical space on 
                              the volume. */ 

   }; 

   struct pnfs_block_extent4 { 

     offset4         file_offset;     /* the starting octet offset in 
                                         the file */ 

     length4         extent_length;   /* the size in octets of the 
                                         extent */ 

     offset4         storage_offset;  /* the starting octet offset in 
                                         the volume */ 

     pnfs_block_extent_state4 es;     /* the state of this extent */ 

   }; 

   struct pnfs_block_layout4 { 

      deviceid4          vol_id;       /* id of logical volume on which 
                                         file is stored. */ 

      pnfs_block_extent4 extents<>;    /* extents which make up this 
                                         layout. */ 

   }; 

   The block layout consists of a deviceid4, shorthand for the whole 
   topology of the logical volume on which the file is stored, followed 
   by a list of extents which map the logical regions of the file to 
   physical locations on the volume.  The storage_offset field within 
   each extent identifies a location on the logical volume specified 
   by the vol_id field in the layout.  The client is responsible for 
   translating this logical offset into an offset on the appropriate 
   underlying SAN logical unit. 

   Each extent maps a logical region of the file onto a portion of the 
   specified logical volume.  The file_offset, extent_length, and es 
   fields for an extent returned from the server are always valid. The 
   interpretation of the storage_offset field depends on the value of es 
   as follows (in increasing order of es values): 

   o  READ_WRITE_DATA means that storage_offset is valid, and points to 
      valid/initialized data that can be read and written. 

   o  READ_DATA means that storage_offset is valid and points to valid/ 
      initialized data which can only be read.  Write operations are 
      prohibited; the client may need to request a read-write layout. 

   o  INVALID_DATA means that storage_offset is valid, but points to 
      invalid un-initialized data. This data must not be physically read 
      from the disk until it has been initialized.  A read request for 
      an INVALID_DATA extent must fill the user buffer with zeros. Write 
      requests must write whole server-sized blocks to the disk; octets 
      not initialized by the user must be set to zero.  Any write to 
      storage in an INVALID_DATA extent changes the written portion of 
      the extent to READ_WRITE_DATA; the pNFS client is responsible for 
      reporting this change via LAYOUTCOMMIT. 

   o  NONE_DATA means that storage_offset is not valid, and this extent 
      may not be used to satisfy write requests. Read requests may be 
      satisfied by zero-filling as for INVALID_DATA. NONE_DATA extents 
      may be returned by requests for readable extents; they are never 
      returned if the request was for a writeable extent. 

   An extent list lists all relevant extents in increasing order of the 
   file_offset of each extent; any ties are broken by increasing order 
   of the extent state (es). 
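
   As a non-normative illustration, the following C sketch services a 
   read that falls entirely within a single extent of a layout.  The 
   types and the volume_read() helper are assumptions standing in for 
   a client implementation's own definitions. 

      #include <stdint.h> 
      #include <string.h> 

      enum extent_state { READ_WRITE_DATA = 0, READ_DATA = 1, 
                          INVALID_DATA = 2, NONE_DATA = 3 }; 

      struct extent {               /* simplified pnfs_block_extent4 */ 
          uint64_t file_offset;     /* octet offset in the file */ 
          uint64_t extent_length;   /* octets */ 
          uint64_t storage_offset;  /* octet offset on the volume */ 
          enum extent_state es; 
      }; 

      /* Assumed helper: read len octets at voloff on the volume. */ 
      int volume_read(uint64_t voloff, void *buf, uint64_t len); 

      /* Service a read lying wholly inside extent e.  INVALID_DATA 
       * and NONE_DATA extents must be zero-filled, never read from 
       * disk; the other two states read from the mapped location. */ 
      static int 
      extent_read(const struct extent *e, uint64_t file_off, 
                  void *buf, uint64_t len) 
      { 
          if (e->es == INVALID_DATA || e->es == NONE_DATA) { 
              memset(buf, 0, len);  /* hole or uninitialized space */ 
              return 0; 
          } 
          return volume_read(e->storage_offset + 
                             (file_off - e->file_offset), buf, len); 
      } 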

2.3.1. Layout Requests and Extent Lists 

   Each request for a layout specifies at least three parameters: file 
   offset, desired size, and minimum size.  If the status of a request 
   indicates success, the extent list returned must meet the following 
   criteria:  

   o  A request for a readable (but not writeable) layout returns only 
      READ_DATA or NONE_DATA extents (but not INVALID_DATA or 
      READ_WRITE_DATA extents). 

   o  A request for a writeable layout returns READ_WRITE_DATA or 
      INVALID_DATA extents (but not NONE_DATA extents).  It may also 
      return READ_DATA extents only when the offset ranges in those 
      extents are also covered by INVALID_DATA extents to permit writes.  

   o  The first extent in the list MUST contain the starting offset. 

   o  The total size of extents in the extent list MUST cover at least 
      the minimum size and no more than the desired size.  One exception 
      is allowed: the total size MAY be smaller if only readable extents 
      were requested and EOF is encountered. 

   o  Extents in the extent list MUST be logically contiguous for a 
      read-only layout.  For a read-write layout, the set of writable 
      extents (i.e., excluding READ_DATA extents) MUST be logically 
      contiguous.  Every READ_DATA extent in a read-write layout MUST be 
      covered by an INVALID_DATA extent.  This overlap of READ_DATA and 
      INVALID_DATA extents is the only permitted extent overlap. 

   o  Extents MUST be ordered in the list by starting offset, with 
      READ_DATA extents preceding INVALID_DATA extents in the case of 
      equal file_offsets (the sketch after this list checks these 
      ordering rules). 
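
   A non-normative C sketch of the ordering check implied by the last 
   rule: extents sorted by file_offset, with ties broken by increasing 
   es value (READ_DATA before INVALID_DATA).  The simplified extent 
   type is an assumption, not the XDR definition. 

      #include <stdint.h> 
      #include <stddef.h> 

      enum extent_state { READ_WRITE_DATA = 0, READ_DATA = 1, 
                          INVALID_DATA = 2, NONE_DATA = 3 }; 

      struct extent { 
          uint64_t file_offset; 
          enum extent_state es; 
          /* other pnfs_block_extent4 fields omitted */ 
      }; 

      /* Check that an extent list is sorted by file_offset, with 
       * ties broken by increasing es value, and that no two extents 
       * share both file_offset and state. */ 
      static int 
      extent_list_ordered(const struct extent *ex, size_t n) 
      { 
          for (size_t i = 1; i < n; i++) { 
              if (ex[i].file_offset < ex[i - 1].file_offset) 
                  return 0;                /* out of order */ 
              if (ex[i].file_offset == ex[i - 1].file_offset && 
                  ex[i].es <= ex[i - 1].es) 
                  return 0;                /* bad or duplicate tie */ 
          } 
          return 1; 
      } 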

2.3.2. Layout Commits 

   struct pnfs_block_layoutupdate4 { 

      pnfs_block_extent4 commit_list<>; /* list of extents which now 
                                           contain valid data. */ 

      bool               make_version;  /* client requests server to 
                                           create copy-on-write image 
                                           of this file. */ 

   }; 

   The "pnfs_block_layoutupdate4" structure is used by the client as the 
   block-protocol specific argument in a LAYOUTCOMMIT operation.  The 
   "commit_list" field is an extent list covering regions of the file 
   layout that were previously in the INVALID_DATA state, but have been 
   written by the client and should now be considered in the 
   READ_WRITE_DATA state.  The es field of each extent in the 
   commit_list MUST be set to READ_WRITE_DATA.  Implementers should be 
   aware that a server may be unable to commit regions at a granularity 
   smaller than a file-system block (typically 4KB or 8KB).  As noted 
   above, the block size that the server uses is available as the 
   NFSv4.1 layout_blksize attribute, and any extents included in the 
   "commit_list" MUST be 
   aligned to this granularity and have a size that is a multiple of 
   this granularity.  If the client believes that its actions have moved 
   the end-of-file into the middle of a block being committed, the 
   client MUST write zeroes from the end-of-file to the end of that 
   block before committing the block.  Failure to do so may result in 
   junk (uninitialized data) appearing in that area if the file is 
   subsequently extended by moving the end-of-file. 
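
   A non-normative C sketch of the alignment and end-of-file rules 
   above; the function and parameter names are illustrative choices, 
   not protocol fields. 

      #include <stdint.h> 

      /* Round a written octet range [off, off+len) out to the 
       * server's block size for the LAYOUTCOMMIT commit_list, and 
       * compute how many octets past end-of-file must first be 
       * written as zeros when EOF falls inside a committed block. */ 
      static void 
      commit_align(uint64_t off, uint64_t len, uint64_t blksize, 
                   uint64_t eof, uint64_t *c_off, uint64_t *c_len, 
                   uint64_t *zero_tail) 
      { 
          uint64_t end = off + len; 

          *c_off = off - off % blksize;            /* round start down */ 
          end += (blksize - end % blksize) % blksize; /* round end up */ 
          *c_len = end - *c_off; 

          /* Zero from EOF to the end of its block so uninitialized 
           * data cannot appear if the file is later extended. */ 
          *zero_tail = (eof > *c_off && eof < end && eof % blksize) 
              ? blksize - eof % blksize : 0; 
      } 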

   The "make_version" field of the structure is a flag that the client 
   may set to request that the server create a copy-on-write image of 
   the file (pNFS clients may be involved in this operation - see 
   section 2.3.4, below).  In anticipation of this operation, the 
   client which sets the "make_version" flag in the LAYOUTCOMMIT 
   operation should immediately mark all extents in the layout that it 
   possesses as state READ_DATA.  Future writes to the file require a 
   new 
   LAYOUTGET operation to the server with an "iomode" set to 
   LAYOUTIOMODE_RW. 

2.3.3. Layout Returns 

   struct pnfs_block_layoutreturn4 { 

      pnfs_block_extent4 rel_list<>;   /* list of extents the client 
                                         will no longer use. */ 

   }; 

   The "rel_list" field is an extent list covering regions of the file 
   layout that are no longer needed by the client.  Including extents in 
   the "rel_list" for a LAYOUTRETURN operation represents an explicit 
 
 
Black                    Expires August 2007                  [Page 12] 
    






Internet-Draft         pNFS Block/Volume Layout              March 2007 
    

   release of resources by the client, usually done for the purpose of 
   avoiding unnecessary CB_LAYOUTRECALL operations in the future. 

   Note that the block/volume layout supports unilateral layout 
   revocation. When a layout is unilaterally revoked by the server, 
   usually due to the client's lease timer expiring or the client 
   failing to return a layout in a timely manner, it is important for 
   the sake of correctness that any in-flight I/Os that the client 
   issued before the layout was revoked are rejected at the storage.  
   For the block/volume protocol, this is possible by fencing a client 
   with an expired layout timer from the physical storage.  Note, 
   however, that the granularity of this operation can only be at the 
   host/logical-unit level.  Thus, if one of a client's layouts is 
   unilaterally revoked by the server, it will effectively render 
   useless *all* of the client's layouts for files located on the 
   storage units comprising the logical volume, including layouts for 
   files in other filesystems. 

2.3.4. Client Copy-on-Write Processing 

   Distinguishing the READ_WRITE_DATA and READ_DATA extent types in 
   combination with the allowed overlap of READ_DATA extents with 
   INVALID_DATA extents allows copy-on-write processing to be done by 
   pNFS clients. In classic NFS, this operation would be done by the 
   server.  Since pNFS enables clients to do direct block access, it is 
   useful for clients to participate in copy-on-write operations.  All 
   block/volume pNFS clients MUST support this copy-on-write processing. 

   When a client wishes to write data covered by a READ_DATA extent, it 
   MUST have requested a writable layout from the server; that layout 
   will contain INVALID_DATA extents to cover all the data ranges of 
   that layout's READ_DATA extents. More precisely, for any file_offset 
   range covered by one or more READ_DATA extents in a writable layout, 
   the server MUST include one or more INVALID_DATA extents in the 
   layout that cover the same file_offset range. When performing a write 
   to such an area of a layout, the client MUST effectively copy the 
   data from the READ_DATA extent for any partial blocks of file_offset 
   and range, merge in the changes to be written, and write the result 
   to the INVALID_DATA extent for the blocks for that file_offset and 
   range. That is, if entire blocks of data are to be overwritten by an 
   operation, the corresponding READ_DATA blocks need not be fetched, 
   but any partial-block writes must be merged with data fetched via 
   READ_DATA extents before storing the result via INVALID_DATA extents.  
   For the purposes of this discussion, "entire blocks" and "partial 
   blocks" refer to the server's file-system block size.  Storing of 
   data in an INVALID_DATA extent converts the written portion of the 
   INVALID_DATA extent to a READ_WRITE_DATA extent; all subsequent reads 
   MUST be performed from this extent; the corresponding portion of the 
   READ_DATA extent MUST NOT be used after storing data in an 
   INVALID_DATA extent. 
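
   As a non-normative illustration, the following C sketch performs 
   the read-modify-write merge for one partial server block.  The 
   volume_read()/volume_write() helpers and all parameter names are 
   assumptions, not protocol elements. 

      #include <stdint.h> 
      #include <string.h> 

      /* Assumed helpers: block-granular I/O against the volume. */ 
      int volume_read(uint64_t voloff, void *buf, uint64_t len); 
      int volume_write(uint64_t voloff, const void *buf, uint64_t len); 

      /* Copy-on-write for one partial server block.  ro_off and 
       * rw_off are the volume offsets of this block within the 
       * covering READ_DATA and INVALID_DATA extents; data holds dlen 
       * new octets at offset doff within the block. */ 
      static int 
      cow_partial_block(uint64_t ro_off, uint64_t rw_off, 
                        uint64_t blksize, uint64_t doff, 
                        const uint8_t *data, uint64_t dlen, 
                        uint8_t *blkbuf) 
      { 
          int rc = volume_read(ro_off, blkbuf, blksize); /* old data */ 
          if (rc) 
              return rc; 
          memcpy(blkbuf + doff, data, dlen);       /* merge changes */ 
          /* The write converts this portion of the INVALID_DATA 
           * extent to READ_WRITE_DATA; later reads use rw_off. */ 
          return volume_write(rw_off, blkbuf, blksize); 
      } 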

   In the LAYOUTCOMMIT operation that normally sends updated layout 
   information back to the server, for writable data, some INVALID_DATA 
   extents may be committed as READ_WRITE_DATA extents, signifying that 
   the storage at the corresponding storage_offset values has been 
   stored into and is now to be considered as valid data to be read. 
   READ_DATA extents are not committed to the server. For extents that 
   the client receives via LAYOUTGET as INVALID_DATA and returns via 
   LAYOUTCOMMIT as READ_WRITE_DATA, the server will understand that the 
   READ_DATA mapping for that extent is no longer valid or necessary for 
   that file. 

2.3.5. Extents are Permissions 

   Layout extents returned to pNFS clients grant permission to read or 
   write; READ_DATA and NONE_DATA are read-only (NONE_DATA reads as 
   zeros), while READ_WRITE_DATA and INVALID_DATA are read/write 
   (INVALID_DATA reads as zeros; any write converts it to 
   READ_WRITE_DATA).  This is the only means by which a client obtains 
   permission to perform direct I/O to storage devices; a pNFS client 
   MUST NOT perform direct I/O operations that are not permitted by an 
   extent held by the client.  Client adherence to this rule places the 
   pNFS server in control of potentially conflicting storage device 
   operations, enabling the server to determine what does conflict and 
   how to avoid conflicts by granting and recalling extents to/from 
   clients.   

   Block/volume class storage devices are not required to perform read 
   and write operations atomically.  Overlapping concurrent read and 
   write operations to the same data may cause the read to return a 
   mixture of before-write and after-write data.  Overlapping write 
   operations can be worse, as the result could be a mixture of data 
   from the two write operations; data corruption can occur if the 
   underlying storage is striped and the operations complete in 
   different orders on different stripes.  A pNFS server can avoid these 
   conflicts by implementing a single writer XOR multiple readers 
   concurrency control policy when there are multiple clients who wish 
   to access the same data.  This policy SHOULD be implemented when 
   storage devices do not provide atomicity for concurrent read/write 
   and write/write operations to the same data. 

   If a client makes a layout request that conflicts with an existing 
   layout delegation, the request will be rejected with the error 
   NFS4ERR_LAYOUTTRYLATER.  The client is then expected to retry the 
   request after a short interval.  During this interval the server 
   SHOULD recall the conflicting portion of the layout delegation from 
   the client that currently holds it.  This reject-and-retry approach 
   does not prevent client starvation when there is contention for the 
   layout of a particular file.  For this reason a pNFS server SHOULD 
   implement a mechanism to prevent starvation.  One possibility is 
   for the server to maintain a queue of rejected layout requests.  
   Each new layout request can be checked to see if it conflicts with 
   a previously rejected request, and if so, the newer request can be 
   rejected.  An entry is cleared from the rejected-request queue once 
   the original requesting client retries its request, or when the 
   entry reaches a certain age; a sketch of this mechanism follows. 
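
   The following C sketch is one possible (non-normative) shape for 
   such a queue; all names and the fixed table size are illustrative 
   assumptions, not part of the protocol. 

      #include <stdint.h> 
      #include <time.h> 

      struct rejected_req {          /* one remembered rejection */ 
          uint64_t clientid, fileid; 
          uint64_t off, len;         /* requested octet range */ 
          time_t   when; 
          int      live; 
      }; 

      #define NREJ 64 
      static struct rejected_req rej[NREJ]; 

      /* Nonzero if a new request should be rejected because an 
       * earlier rejected request from another client still covers an 
       * overlapping range of the same file.  Entries age out after 
       * max_age seconds or are cleared when their owner retries. */ 
      static int 
      conflicts_with_rejected(uint64_t clientid, uint64_t fileid, 
                              uint64_t off, uint64_t len, 
                              time_t max_age) 
      { 
          time_t now = time(NULL); 

          for (int i = 0; i < NREJ; i++) { 
              if (!rej[i].live) 
                  continue; 
              if (now - rej[i].when > max_age || 
                  (rej[i].clientid == clientid && 
                   rej[i].fileid == fileid)) { 
                  rej[i].live = 0;   /* aged out, or owner retried */ 
                  continue; 
              } 
              if (rej[i].fileid == fileid && 
                  off < rej[i].off + rej[i].len && 
                  rej[i].off < off + len) 
                  return 1;          /* defer to the earlier client */ 
          } 
          return 0; 
      } 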

   NFSv4 supports mandatory locks and share reservations.  These are 
   mechanisms that clients can use to restrict the set of I/O operations 
   that are permissible to other clients.  Since all I/O operations 
   ultimately arrive at the NFSv4 server for processing, the server is 
   in a position to enforce these restrictions.  However, with pNFS 
   layout delegations, I/Os will be issued from the clients that hold 
   the delegations directly to the storage devices that host the data.  
   These devices have no knowledge of files, mandatory locks, or share 
   reservations, and are not in a position to enforce such restrictions.  
   For this reason the NFSv4 server MUST NOT grant layout delegations 
   that conflict with mandatory locks or share reservations.  Further, 
   if a conflicting mandatory lock request or a conflicting open request 
   arrives at the server, the server MUST recall the part of the layout 
   delegation in conflict with the request before granting the request. 

2.3.6. End-of-file Processing 

   The end-of-file location can be changed in two ways: implicitly as 
   the result of a WRITE or LAYOUTCOMMIT beyond the current end-of-file, 
   or explicitly as the result of a SETATTR request.  Typically, when a 
   file is truncated by an NFSv4 client via the SETATTR call, the server 
   frees any disk blocks belonging to the file which are beyond the new 
   end-of-file octet, and may write zeros to the portion of the new end-
   of-file block beyond the new end-of-file octet.  These actions render 
   any pNFS layouts which refer to the blocks that are freed or written 
   semantically invalid.  Therefore, the server MUST recall from clients 
   the portions of any pNFS layouts which refer to blocks that will be 
   freed or written by the server before processing the truncate 
   request. These recalls may take time to complete; as explained in 
   [NFSv4.1], if the server cannot respond to the client SETATTR request 
   in a reasonable amount of time, it SHOULD reply to the client with 
   the error NFS4ERR_DELAY. 

   Blocks in the INVALID_DATA state which lie beyond the new end-of-file 
   block present a special case.  The server has reserved these blocks 
   for use by a pNFS client with a writable layout for the file, but the 
   client has yet to commit the blocks, and they are not yet a part of 
   the file mapping on disk.  The server MAY free these blocks while 
   processing the SETATTR request.  If so, the server MUST recall any 
   layouts from pNFS clients which refer to the blocks before processing 
   the truncate.  If the server does not free the INVALID_DATA blocks 
   while processing the SETATTR request, it need not recall layouts 
   which refer only to the INVALID_DATA blocks. 

   When a file is extended implicitly by a WRITE or LAYOUTCOMMIT beyond 
   the current end-of-file, or extended explicitly by a SETATTR request, 
   the server need not recall any portions of any pNFS layouts. 

2.3.7. Client Fencing 

   The pNFS block protocol must handle situations in which a system 
   failure, typically a network connectivity issue, requires the server 
   to unilaterally revoke extents from one client in order to transfer 
   the extents to another client.  The pNFS server implementation MUST 
   ensure that when resources are transferred to another client, they 
   are not used by the client originally owning them, and this must be 
   ensured against any possible combination of partitions and delays 
   among all of the participants to the protocol (server, storage and 
   client).  Several approaches to guaranteeing this isolation are 
   possible and are discussed below. 

   One server based implementation choice for fencing is to use the 
   STOMITH (Shoot The Other Machine In The Head) protocol, i.e., turn 
   off the power to the client machine that needs to be isolated.  This 
   is possible if the server has access to either an IPMI interface to 
   power cycle the client, or an alternate method of turning off power 
   to a non-communicative client.  The client SHOULD be kept powered off 
   for at least the duration of the server lease time, as it is 
   possible, although atypical, that the client caches the layout 
   information on persistent storage.  This approach can in some 
   instances guarantee that the rogue client no longer is capable of 
   accessing the storage.  However, in other situations, for example 
   lack of TCP/IP access to the client's IPMI network address, this 
   approach cannot guarantee anything.   

   Another implementation choice for fencing the block client from the 
   block storage is the use of LUN (Logical Unit Number) masking or 
   mapping at the storage systems or storage area network to disable 
   access by the client to be isolated.  In contrast to the STOMITH 
   approach, this requires server access to a management interface for 
   the storage system and authorization to perform LUN masking and 
   management operations.  For example, SMI-S [SMIS] provides a means to 
   discover and mask LUNs, including a means of associating clients with 
   the necessary World Wide Names or Initiator names to be masked. 

   In the absence of support for other mechanisms, the server MUST 
   rely on the clients to implement a timed-lease I/O fencing 
   mechanism.  Because clients do not know whether the server is using 
   STOMITH or LUN masking, in all cases the client MUST implement 
   timed-lease fencing.  In timed-lease fencing we define two time 
   periods, 
   the first, "lease_time" is the length of a lease as defined by the 
   server's lease_time attribute (see Section 5.4 of [NFSV4.1]), and the 
   second, "maximum_io_time" is the maximum time it can take for a 
   client I/O to the storage system to either complete or fail; this 
   value is often 30 seconds or 60 seconds, but may be longer in some 
   environments.  If the maximum client I/O time cannot be bounded, this 
   timed lease mechanism MUST NOT be used.  The client can use GETATTR 
   to query the server's default setting of "maximum_io_time".  The 
   server must respond with the maximum I/O time in seconds.  If the 
   client's maximum I/O time is greater than the server's default, then 
   the client MUST use SETATTR to inform the server of its 
   maximum_io_time.  Using these two time-span values, we specify the 
   behavior of 
   the client and server as follows. 

   When a client receives layout information via a LAYOUTGET operation, 
   those layouts are valid for at most "lease_time" seconds from when 
   the server granted them.  A layout is renewed by any successful 
   SEQUENCE operation, or whenever a new stateid is created or updated 
   (see the section "Lease Renewal" of [NFSV4.1]).  If the layout lease 
   is not renewed prior to expiration, the client MUST cease to use the 
   layout after "lease_time" seconds from when it either sent the 
   original LAYOUTGET command, or sent the last operation renewing the 
   lease.  In other words, the client may not issue any I/O to blocks 
   specified by an expired layout.  In the presence of large 
   communication delays between the client and server it is even 
   possible for the lease to expire prior to the server response 
   arriving at the client.  In such a situation the client MUST NOT use 
   the expired layouts, and SHOULD revert to using standard NFSv4.1 READ 
   and WRITE operations.  Furthermore, the client must be configured 
   such that I/O operations complete within the "maximum_io_time" even 
   in the presence of multipathing drivers that will retry I/Os via 
   multiple paths.  If a client cannot guarantee a bounded maximum I/O 
   time, it MUST NOT use pNFS. 

   As stated in the section "Dealing with Lease Expiration on the 
   Client" of [NFSV4.1], if any SEQUENCE operation is successful, but 
   sr_status_flag has SEQ4_STATUS_EXPIRED_ALL_STATE_REVOKED, 
   SEQ4_STATUS_EXPIRED_SOME_STATE_REVOKED, or 
   SEQ4_STATUS_ADMIN_STATE_REVOKED set, the client MUST immediately 
   cease to use all layouts and device id to device address mappings 
   associated with the corresponding server. 

   In the absence of known two-way communication between the client and 
   the server on the fore channel, the server must wait for at least the 
   time period "lease_time" plus "maximum_io_time" before transferring 
   layouts from the original client to any other client.  The server, 
   like the client, must take a conservative approach, and start the 
   lease expiration timer from the time that it received the operation 
   which last renewed the lease. 
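
   The timing rules above reduce to two simple computations, sketched 
   here in C (non-normative; the conservative measurement points are 
   when the client sent, and the server received, the last renewing 
   operation): 

      #include <stdint.h> 
      #include <time.h> 

      /* Client side: a layout is usable only within lease_time 
       * seconds of sending the operation that last renewed the 
       * lease. */ 
      static int 
      layout_usable(time_t last_renewal_sent, uint32_t lease_time, 
                    time_t now) 
      { 
          return now < last_renewal_sent + (time_t)lease_time; 
      } 

      /* Server side: with no two-way communication on the fore 
       * channel, wait lease_time + maximum_io_time from receipt of 
       * the last renewing operation before giving the extents to 
       * another client, so any in-flight I/O from the old client has 
       * completed or failed. */ 
      static time_t 
      earliest_safe_regrant(time_t last_renewal_rcvd, 
                            uint32_t lease_time, 
                            uint32_t maximum_io_time) 
      { 
          return last_renewal_rcvd + (time_t)lease_time + 
                 (time_t)maximum_io_time; 
      } 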

2.4. Crash Recovery Issues 

   When the server crashes while the client holds a writable layout, and 
   the client has written data to blocks covered by the layout, and the 
   blocks are still in the INVALID_DATA state, the client has two 
   options for recovery.  If the data that has been written to these 
   blocks is still cached by the client, the client can simply re-write 
   the data via NFSv4, once the server has come back online.  However, 
   if the data is no longer in the client's cache, the client MUST NOT 
   attempt to source the data from the data servers.  Instead, it should 
   attempt to commit the blocks in question to the server during the 
   server's recovery grace period, by sending a LAYOUTCOMMIT with the 
   "loca_reclaim" flag set to true. This process is described in detail 
   in [NFSv4.1] section 18.42.4. 

2.5. Recalling resources: CB_RECALL_ANY 

   The server may decide that it cannot hold all of the state for 
   layouts without running out of resources. In such a case, it is free 
   to recall individual layouts using CB_LAYOUTRECALL to reduce the 
   load, or it may choose to request that the client return any layout. 

   For the block layout we define the following bit 

   const RCA4_BLK_LAYOUT_RECALL_ANY_LAYOUTS = 4 

   When the server sends a CB_RECALL_ANY request to a client specifying 
   the RCA4_BLK_LAYOUT_RECALL_ANY_LAYOUTS bit in craa_type_mask, the 
   client should immediately respond with NFS4_OK, and then 
   asynchronously return complete file layouts until the number of files 
   with layouts cached on the client is less than craa_objects_to_keep. 

   The block layout does not currently use bits 5, 6 or 7.  If any of 
   these bits are set, the client should return NFS4ERR_INVAL. 
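
   A non-normative C sketch of this client-side handling; the helper 
   names are assumptions standing in for an implementation's own code, 
   and the real craa_type_mask and craa_objects_to_keep values come 
   from the CB_RECALL_ANY arguments defined in [NFSV4.1]. 

      #include <stdint.h> 

      #define RCA4_BLK_LAYOUT_RECALL_ANY_LAYOUTS 4 

      /* Assumed helpers standing in for the client's own code. */ 
      unsigned files_with_cached_layouts(void); 
      void return_one_file_layout(void); /* LAYOUTRETURN for a file */ 

      /* Handle CB_RECALL_ANY for the block layout: acknowledge 
       * immediately, then asynchronously return whole-file layouts 
       * until fewer than objects_to_keep files retain layouts.  Bits 
       * 5-7 are unused by this layout type and are rejected. */ 
      static int 
      handle_recall_any(uint32_t type_mask, uint32_t objects_to_keep) 
      { 
          if (type_mask & (0x7u << 5)) 
              return -1;           /* bits 5-7 set: NFS4ERR_INVAL */ 
          if (type_mask & (1u << RCA4_BLK_LAYOUT_RECALL_ANY_LAYOUTS)) { 
              /* respond NFS4_OK first; then, asynchronously: */ 
              while (files_with_cached_layouts() >= objects_to_keep && 
                     files_with_cached_layouts() > 0) 
                  return_one_file_layout(); 
          } 
          return 0;                /* NFS4_OK */ 
      } 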

2.6. Transient and Permanent Errors 

   The server may respond to LAYOUTGET with a variety of error statuses. 
   These errors can convey transient conditions or more permanent 
   conditions that are unlikely to be resolved soon. 

   The transient errors, NFS4ERR_RECALLCONFLICT and 
   NFS4ERR_LAYOUTTRYLATER, are used to indicate that the server cannot 
   immediately grant the layout to the client.  In the former case this 
   is because the server has recently issued a CB_LAYOUTRECALL to the 
   requesting client, whereas in the case of NFS4ERR_LAYOUTTRYLATER, 
   the server cannot grant the request, possibly due to sharing 
   conflicts with other clients.  In either 
   case, a reasonable approach for the client is to wait several 
   milliseconds and retry the request.  The client SHOULD track the 
   number of retries, and if forward progress is not made, the client 
   SHOULD send the READ or WRITE operation directly to the server. 

   The error NFS4ERR_LAYOUTUNAVAILABLE may be returned by the server if 
   layouts are not supported for the requested file or its containing 
   file system.  The server may also return this error code if the 
   server is in the process of migrating the file from secondary 
   storage, or for any other reason which causes the server to be 
   unable to supply the layout.  As a result of receiving 
   NFS4ERR_LAYOUTUNAVAILABLE, the client SHOULD send future READ and 
   WRITE requests directly to the server.  It is expected that a client 
   will not cache the file's layout-unavailable state forever, 
   particularly if the file is closed, and thus eventually, the client 
   MAY reissue a LAYOUTGET operation. 
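
   A non-normative C sketch of this client-side policy; the status 
   symbols and helper functions are stand-ins for an implementation's 
   own RPC and I/O paths, and the retry bound and delay are arbitrary 
   illustrative choices. 

      enum lg_status {              /* abstract LAYOUTGET outcomes */ 
          LG_OK, 
          LG_RECALLCONFLICT,        /* NFS4ERR_RECALLCONFLICT */ 
          LG_TRYLATER,              /* NFS4ERR_LAYOUTTRYLATER */ 
          LG_UNAVAILABLE            /* NFS4ERR_LAYOUTUNAVAILABLE */ 
      }; 

      /* Assumed helpers standing in for the client's own code. */ 
      enum lg_status send_layoutget(void); 
      void io_via_nfs_read_write(void); /* ordinary READ/WRITE path */ 
      void sleep_ms(unsigned ms); 

      /* Retry LAYOUTGET a bounded number of times on transient 
       * errors; fall back to READs and WRITEs through the server on 
       * NFS4ERR_LAYOUTUNAVAILABLE or lack of forward progress. */ 
      static void 
      get_layout_or_fall_back(void) 
      { 
          for (int tries = 0; tries < 8; tries++) { 
              switch (send_layoutget()) { 
              case LG_OK: 
                  return;                  /* layout granted */ 
              case LG_RECALLCONFLICT: 
              case LG_TRYLATER: 
                  sleep_ms(5);             /* brief wait, then retry */ 
                  break; 
              case LG_UNAVAILABLE: 
                  io_via_nfs_read_write(); /* no layout available */ 
                  return; 
              } 
          } 
          io_via_nfs_read_write();  /* no forward progress; give up */ 
      } 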

3. Security Considerations 

   Typically, SAN disk arrays and SAN protocols provide access control 
   mechanisms (e.g., access logics, LUN masking) which operate at the 
   granularity of individual hosts.  The functionality provided by such 
   mechanisms makes it possible for the server to "fence" individual 
   client machines from certain physical disks; that is, to 
   prevent individual client machines from reading or writing to certain 
   physical disks.  Finer-grained access control methods are not 
   generally available.  For this reason, certain security 
   responsibilities are delegated to pNFS clients for block/volume 
   layouts.  Block/volume storage systems generally control access at a 
   volume granularity, and hence pNFS clients have to be trusted to only 
   perform accesses allowed by the layout extents they currently hold 
   (e.g., and not access storage for files on which a layout extent is 
   not held).  In general, the server will not be able to prevent a 
   client which holds a layout for a file from accessing parts of the 
   physical disk not covered by the layout.  Similarly, the server will 
   not be able to prevent a client from accessing blocks covered by a 
   layout that it has already returned.  This block-based level of 
   protection must be provided by the client software. 

   An alternative method of block/volume protocol use is for the storage 
   devices to export virtualized block addresses, which do reflect the 
   files to which blocks belong.  These virtual block addresses are 
   exported to pNFS clients via layouts.  This allows the storage device 
   to make appropriate access checks, while mapping virtual block 
   addresses to physical block addresses.  In environments where the 
   security requirements are such that client-side protection from 
   access to storage outside of the layout is not sufficient, pNFS 
   block/volume storage layouts SHOULD NOT be used, unless the storage 
   device is able to implement the appropriate access checks, via use 
   of virtualized block addresses or other means. 

   This also has implications for some NFSv4 functionality outside pNFS.  
   For instance, if a file is covered by a mandatory read-only lock, the 
   server can ensure that only readable layouts for the file are granted 
   to pNFS clients.  However, it is up to each pNFS client to ensure 
   that the readable layout is used only to service read requests, and 
   not to allow writes to the existing parts of the file.  Since 
   block/volume storage systems are generally not capable of enforcing 
   such file-based security, in environments where pNFS clients cannot 
   be trusted to enforce such policies, pNFS block/volume storage 
   layouts SHOULD NOT be used. 

   Access to block/volume storage is logically at a lower layer of the 
   I/O stack than NFSv4, and hence NFSv4 security is not directly 
   applicable to protocols that access such storage directly.  Depending 
   on the protocol, some of the security mechanisms provided by NFSv4 
   (e.g., encryption, cryptographic integrity) may not be available, or 
   may be provided via different means.  At one extreme, pNFS with 
   block/volume storage can be used with storage access protocols (e.g., 
   parallel SCSI) that provide essentially no security functionality.  
   At the other extreme, pNFS may be used with storage protocols such as 
   iSCSI that provide significant functionality.  It is the 
   responsibility of those administering and deploying pNFS with a 
   block/volume storage access protocol to ensure that appropriate 
   protection is provided to that protocol (physical security is a 
   common means for protocols not based on IP).  In environments where 
   the security requirements for the storage protocol cannot be met, 
   pNFS block/volume storage layouts SHOULD NOT be used. 

   When security is available for a storage protocol, it is generally at 
   a different granularity and with a different notion of identity than 
   NFSv4 (e.g., NFSv4 controls user access to files, iSCSI controls 
   initiator access to volumes).  The responsibility for enforcing 
   appropriate correspondences between these security layers is placed 
   upon the pNFS client.  As with the issues in the first paragraph of 
   this section, in environments where the security requirements are 
   such that client-side protection from access to storage outside of 
   the layout is not sufficient, pNFS block/volume storage layouts  
   SHOULD NOT be used. 

4. Conclusions 

   This draft specifies the block/volume layout type for pNFS and 
   associated functionality. 

5. IANA Considerations 

   There are no IANA considerations in this document.  All pNFS IANA 
   Considerations are covered in [NFSV4.1]. 

6. Revision History 

   -00: Initial Version as draft-black-pnfs-block-00 

   -01: Rework discussion of extents as locks to talk about extents 
   granting access permissions.  Rewrite operation ordering section to 
   discuss deadlocks and races that can cause problems.  Add new section 
   on recall completion.  Add client copy-on-write based on text from 
   Craig Everhart. 

   -02: Fix glitches in extent state descriptions.  Describe most issues 
   as RESOLVED.  Most of Section 3 has been incorporated into the 
   main pNFS draft; add NOTE to that effect and say that it will be 
   deleted in the next version of this draft (which should be a draft-
   ietf-nfsv4 draft).  Cleaning up a number of things have been left to 
   that draft revision, including the interlocks with the types in the 
   main pNFS draft, layout striping support, and finishing the Security 
   Considerations section. 

   -00: New version as draft-ietf-nfsv4-pnfs-block.  Removed resolved 
   operations issues (Section 3).  Align types with main pNFS draft 
   (which is now part of the NFSv4.1 minor version draft), add volume 
   striping and slicing support.  New operations issues are in Section 3 
   - the need for a "reclaim bit" and EOF concerns are the two major 
   issues.  Extended and improved the Security Considerations section, 
   but it still needs work.  Added 1-sentence conclusion that also still 
   needs work. 

   -01: Changed definition of pnfs_block_deviceaddr4 union to allow more 
   concise representation of aggregated volume structures.  Fixed typos 
   to make both pnfs_block_layoutupdate and pnfs_block_layoutreturn 
   structures contain extent lists instead of a single extent.  Updated 
   section 2.1.6 to remove references to CB_SIZECHANGED. Moved 
   description of recovery from "Issues" section to "Block Layout 
   Description" section. Removed section 3.2 "End-of-file handling 
   issues".  Merged old "block/volume layout security considerations" 
   section from previous version of [NFSv4.1] with section 4.  Moved 
   paragraph on lingering writes to the section which describes layout 
   return.  Removed Issues section (3) as the remaining issues are all 
   resolved. 

   -02: Changed pnfs_deviceaddr4 to deviceaddr4 to match [NFSv4.1].  
   Updated section 2.2.2 to clarify that the es fields must be 
   READ_WRITE_DATA in pnfs_block_layoutupdate requests.  Updated section 
   2.2.5 to specify that data corruption can occur; that requests, not 
   the client, are rejected; that server "SHOULD" recall conflicting 
   portions of layouts.  Clarified that unilateral revocation may affect 
   layouts from other filesystems.  Changed signature offset to be a 
   signed quantity to allow for labels at a fixed location from the end 
   of a volume.  Changed all data structures to have suffix "4", changed 
   extentState4 to pnfs_block_extent_state4 and sigComponent to 
   pnfs_block_sig_component4, to conform to [NFSv4.1]. 

   -03: Moved sections GETDEVICELIST and GETDEVICEINFO earlier in 
   document for better readability.  Added pnfs_block_simple_volume4 
   data structure, and added volume_id fields to all pnfs_block volume 
   info data structures. 

   -04: Added information about device ids to clarify their usage.  
   Described where the pnfs_block_deviceaddr4 data structure is found to 
   be accurate with draft-ietf-nfsv4-minorversion1-14.  Updated 
   references from -08 to -14.  Removed root_id from 
   pnfs_block_deviceaddr4.  Changed 'byte' to 'octet'.  Clarify the 
   block size and stripe size in volume data structures.  Rename 
   'volume' and 'id' to be 'vol_id' consistently.  Added sections on 
   CB_RECALL_ANY and fencing. 

7. Acknowledgments 

   This draft draws extensively on the authors' familiarity with the 
   mapping functionality and protocol in EMC's HighRoad system 
   [HighRoad].  The protocol used by HighRoad is called FMP (File 
   Mapping Protocol); it is an add-on protocol that runs in parallel 
   with filesystem protocols such as NFSv3 to provide pNFS-like 
   functionality for block/volume storage.  While drawing on HighRoad 
   FMP, the data structures and functional considerations in this draft 
   differ in significant ways, based on lessons learned and the 
   opportunity to take advantage of NFSv4 features such as COMPOUND 
   operations.  The design to support pNFS client participation in copy-
   on-write is based on text and ideas contributed by Craig Everhart 
   (formerly with IBM). 

8. References 

8.1. Normative References 

   [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 
             Requirement Levels", BCP 14, RFC 2119, March 1997. 

   [NFSV4.1] Shepler, S., Eisler, M., and Noveck, D. ed., "NFSv4 Minor 
             Version 1", draft-ietf-nfsv4-minorversion1-14.txt, Internet 
             Draft, July 2007. 

8.2. Informative References 

   [HighRoad] EMC Corporation, "EMC Celerra HighRoad", EMC C819.1 white 
              paper, available at: 
   http://www.emc.com/pdf/products/celerra_file_server/HighRoad_wp.pdf 
              link checked 29 August 2006.       

   [SMIS] SNIA, "SNIA Storage Management Initiative Specification", 
            version 1.0.2, available at: 
            http://www.snia.org/smi/tech_activities/smi_spec_pr/spec/SMIS_1_0_2_final.pdf 

Author's Addresses 

   David L. Black 
   EMC Corporation 
   176 South Street 
   Hopkinton, MA 01748 
       
   Phone: +1 (508) 293-7953 
   Email: black_david@emc.com 

   Stephen Fridella 
   EMC Corporation 
   228 South Street 
   Hopkinton, MA  01748 
       
   Phone: +1 (508) 249-3528 
   Email: fridella_stephen@emc.com 
    
   Jason Glasgow 
   EMC Corporation 
   32 Coslin Drive 
   Southboro, MA  01772 
       
   Phone: +1 (508) 305 8831 
   Email: glasgow_jason@emc.com 
    

Intellectual Property Statement 

   The IETF takes no position regarding the validity or scope of any 
   Intellectual Property Rights or other rights that might be claimed to 
   pertain to the implementation or use of the technology described in 
   this document or the extent to which any license under such rights 
   might or might not be available; nor does it represent that it has 
   made any independent effort to identify any such rights.  Information 
   on the procedures with respect to rights in RFC documents can be 
   found in BCP 78 and BCP 79. 

   Copies of IPR disclosures made to the IETF Secretariat and any 
   assurances of licenses to be made available, or the result of an 
   attempt made to obtain a general license or permission for the use of 
   such proprietary rights by implementers or users of this 
   specification can be obtained from the IETF on-line IPR repository at 
   http://www.ietf.org/ipr. 

   The IETF invites any interested party to bring to its attention any 
   copyrights, patents or patent applications, or other proprietary 
   rights that may cover technology that may be required to implement 
   this standard.  Please address the information to the IETF at ietf-
   ipr@ietf.org. 

Disclaimer of Validity 

   This document and the information contained herein are provided on an 
   "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS 
   OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST AND 
   THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS 
   OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF 
   THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED 
   WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. 

Copyright Statement 

   Copyright (C) The IETF Trust (2007). 

   This document is subject to the rights, licenses and restrictions 
   contained in BCP 78, and except as set forth therein, the authors 
   retain all their rights. 

Acknowledgment 

   Funding for the RFC Editor function is currently provided by the 
   Internet Society. 