
                             SCSI FC Transport
                 =============================================

Date:  11/18/2008

Kernel Revisions for features:
  rports : <<TBS>>
  vports : 2.6.22
  bsg support : 2.6.30 (?TBD?)


Introduction
============
This file documents the features and components of the SCSI FC Transport.
It also documents the API between the transport and FC LLDDs.

The FC transport can be found at:
  drivers/scsi/scsi_transport_fc.c
  include/scsi/scsi_transport_fc.h
  include/scsi/scsi_netlink_fc.h
  include/scsi/scsi_bsg_fc.h

This file is found at Documentation/scsi/scsi_fc_transport.txt


FC Remote Ports (rports)
========================================================================
<< To Be Supplied >>


FC Virtual Ports (vports)
========================================================================

Overview:
-------------------------------
New FC standards have defined mechanisms which allow a single physical
port to appear as multiple communication ports.  Using the N_Port Id
Virtualization (NPIV) mechanism, a point-to-point connection to a Fabric
can be assigned more than one N_Port_ID.  Each N_Port_ID appears as a
separate port to other endpoints on the fabric, even though it shares one
physical link to the switch for communication.  Each N_Port_ID can have a
unique view of the fabric based on fabric zoning and array lun-masking
(just like a normal non-NPIV adapter).  Using the Virtual Fabric (VF)
mechanism, adding a fabric header to each frame allows the port to
interact with the Fabric Port to join multiple fabrics.  The port will
obtain an N_Port_ID on each fabric it joins.  Each fabric will have its
own unique view of endpoints and configuration parameters.  NPIV may be
used together with VF so that the port can obtain multiple N_Port_IDs
on each virtual fabric.

The FC transport now recognizes a new object - a vport.  A vport is
an entity that has a world-wide unique World Wide Port Name (wwpn) and
World Wide Node Name (wwnn).  The transport also allows the FC4 roles
to be specified for the vport, with FCP_Initiator being the primary role
expected.  Once instantiated by one of the above methods, it will have a
distinct N_Port_ID and view of fabric endpoints and storage entities.
The fc_host associated with the physical adapter will export the ability
to create vports.  The transport will create the vport object within the
Linux device tree, and instruct the fc_host's driver to instantiate the
virtual port.  Typically, the driver will create a new scsi_host instance
on the vport, resulting in a unique <H,C,T,L> namespace for the vport.
Thus, whether an FC port is based on a physical port or on a virtual port,
each will appear as a unique scsi_host with its own target and lun space.

Note: At this time, the transport is written to create only NPIV-based
  vports.  However, consideration was given to VF-based vports and it
  should be a minor change to add support if needed.  The remaining
  discussion will concentrate on NPIV.

Note: World Wide Name assignment (and uniqueness guarantees) is left
  up to an administrative entity controlling the vport.  For example,
  if vports are to be associated with virtual machines, a Xen mgmt
  utility would be responsible for creating wwpn/wwnn's for the vport,
  using its own naming authority and OUI.  (Note: it already does this
  for virtual MAC addresses.)


Device Trees and Vport Objects:
-------------------------------
Today, the device tree typically contains the scsi_host object,
with rports and scsi target objects underneath it.  Currently the FC
transport creates the vport object and places it under the scsi_host
object corresponding to the physical adapter.  The LLDD will allocate
a new scsi_host for the vport and link its object under the vport.
The remainder of the tree under the vport's scsi_host is the same
as the non-NPIV case.  The transport is currently written to easily
allow the parent of the vport to be something other than the scsi_host.
This could be used in the future to link the object onto a vm-specific
device tree.  If the vport's parent is not the physical port's scsi_host,
a symbolic link to the vport object will be placed in the physical
port's scsi_host.

Here's what to expect in the device tree:
 The typical Physical Port's Scsi_Host:
  /sys/devices/.../host17/
 and it has the typical descendant tree:
  /sys/devices/.../host17/rport-17:0-0/target17:0:0/17:0:0:0:
 and then the vport is created on the Physical Port:
  /sys/devices/.../host17/vport-17:0-0
 and the vport's Scsi_Host is then created:
  /sys/devices/.../host17/vport-17:0-0/host18
 and then the rest of the tree progresses, such as:
  /sys/devices/.../host17/vport-17:0-0/host18/rport-18:0-0/target18:0:0/18:0:0:0:

Here's what to expect in the sysfs tree:
 scsi_hosts:
  /sys/class/scsi_host/host17                 physical port's scsi_host
  /sys/class/scsi_host/host18                 vport's scsi_host
 fc_hosts:
  /sys/class/fc_host/host17                   physical port's fc_host
  /sys/class/fc_host/host18                   vport's fc_host
 fc_vports:
  /sys/class/fc_vports/vport-17:0-0           the vport's fc_vport
 fc_rports:
  /sys/class/fc_remote_ports/rport-17:0-0     rport on the physical port
  /sys/class/fc_remote_ports/rport-18:0-0     rport on the vport


Vport Attributes:
-------------------------------
The new fc_vport class object has the following attributes:

     node_name:                                          Read_Only
       The WWNN of the vport.

     port_name:                                          Read_Only
       The WWPN of the vport.

     roles:                                              Read_Only
       Indicates the FC4 roles enabled on the vport.

     symbolic_name:                                      Read_Write
       A string, appended to the driver's symbolic port name string, which
       is registered with the switch to identify the vport.  For example,
       a hypervisor could set this string to "Xen Domain 2 VM 5 Vport 2",
       and this set of identifiers can be seen on switch management screens
       to identify the port.

     vport_delete:                                       Write_Only
       When written with a "1", will tear down the vport.

     vport_disable:                                      Write_Only
       When written with a "1", will transition the vport to a disabled
       state.  The vport will still be instantiated with the Linux kernel,
       but it will not be active on the FC link.
       When written with a "0", will enable the vport.

     vport_last_state:                                   Read_Only
       Indicates the previous state of the vport.  See the section below on
       "Vport States".

     vport_state:                                        Read_Only
       Indicates the state of the vport.  See the section below on
       "Vport States".

     vport_type:                                         Read_Only
       Reflects the FC mechanism used to create the virtual port.
       Only NPIV is supported currently.

For the fc_host class object, the following attributes are added for vports:

     max_npiv_vports:                                    Read_Only
       Indicates the maximum number of NPIV-based vports that the
       driver/adapter can support on the fc_host.

     npiv_vports_inuse:                                  Read_Only
       Indicates how many NPIV-based vports have been instantiated on the
       fc_host.

     vport_create:                                       Write_Only
       A "simple" create interface to instantiate a vport on an fc_host.
       A "<WWPN>:<WWNN>" string is written to the attribute.  The transport
       then instantiates the vport object and calls the LLDD to create the
       vport with the role of FCP_Initiator.  Each WWN is specified as 16
       hex characters and may *not* contain any prefixes (e.g. 0x, x, etc).

     vport_delete:                                       Write_Only
       A "simple" delete interface to tear down a vport.  A "<WWPN>:<WWNN>"
       string is written to the attribute.  The transport will locate the
       vport on the fc_host with the same WWNs and tear it down.  Each WWN
       is specified as 16 hex characters and may *not* contain any prefixes
       (e.g. 0x, x, etc).
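
     As a minimal illustration of driving this interface from a management
     application, the sketch below writes a "<WWPN>:<WWNN>" string to the
     vport_create attribute from userspace C.  The host17 instance and the
     WWN values are placeholders, not real identifiers:

       #include <fcntl.h>
       #include <stdio.h>
       #include <string.h>
       #include <unistd.h>

       int main(void)
       {
               /* "<WWPN>:<WWNN>": 16 hex characters each, no 0x prefix */
               const char *wwns = "2001000d60dd22be:2003000d60dd22be";
               int fd;

               fd = open("/sys/class/fc_host/host17/vport_create", O_WRONLY);
               if (fd < 0) {
                       perror("open");
                       return 1;
               }
               if (write(fd, wwns, strlen(wwns)) < 0)
                       perror("write");
               close(fd);
               return 0;
       }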


Vport States:
-------------------------------
Vport instantiation consists of two parts:
  - Creation with the kernel and LLDD.  This means all transport and
    driver data structures are built up, and device objects created.
    This is equivalent to a driver "attach" on an adapter, which is
    independent of the adapter's link state.
  - Instantiation of the vport on the FC link via ELS traffic, etc.
    This is equivalent to a "link up" and successful link initialization.
Further information can be found in the interfaces section below for
Vport Creation.

Once a vport has been instantiated with the kernel/LLDD, a vport state
can be reported via the sysfs attribute.  The following states exist:

    FC_VPORT_UNKNOWN            - Unknown
        A temporary state, typically set only while the vport is being
        instantiated with the kernel and LLDD.

    FC_VPORT_ACTIVE             - Active
        The vport has been successfully created on the FC link.
        It is fully functional.

    FC_VPORT_DISABLED           - Disabled
        The vport is instantiated, but "disabled".  The vport is not
        instantiated on the FC link.  This is equivalent to a physical
        port with the link "down".

    FC_VPORT_LINKDOWN           - Linkdown
        The vport is not operational as the physical link is not operational.

    FC_VPORT_INITIALIZING       - Initializing
        The vport is in the process of instantiating on the FC link.
        The LLDD will set this state just prior to starting the ELS traffic
        to create the vport.  This state will persist until the vport is
        successfully created (state becomes FC_VPORT_ACTIVE) or it fails
        (state is one of the values below).  As this state is transitory,
        it will not be preserved in the "vport_last_state".

    FC_VPORT_NO_FABRIC_SUPP     - No Fabric Support
        The vport is not operational.  One of the following conditions was
        encountered:
         - The FC topology is not Point-to-Point
         - The FC port is not connected to an F_Port
         - The F_Port has indicated that NPIV is not supported.

    FC_VPORT_NO_FABRIC_RSCS     - No Fabric Resources
        The vport is not operational.  The Fabric failed FDISC with a status
        indicating that it does not have sufficient resources to complete
        the operation.

    FC_VPORT_FABRIC_LOGOUT      - Fabric Logout
        The vport is not operational.  The Fabric has LOGO'd the N_Port_ID
        associated with the vport.

    FC_VPORT_FABRIC_REJ_WWN     - Fabric Rejected WWN
        The vport is not operational.  The Fabric failed FDISC with a status
        indicating that the WWNs are not valid.

    FC_VPORT_FAILED             - VPort Failed
        The vport is not operational.  This is a catchall for all other
        error conditions.

The following state table indicates the different state transitions:

   State              Event                            New State
   --------------------------------------------------------------------
    n/a               Initialization                   Unknown
   Unknown:           Link Down                        Linkdown
                      Link Up & Loop                   No Fabric Support
                      Link Up & no Fabric              No Fabric Support
                      Link Up & FLOGI response         No Fabric Support
                        indicates no NPIV support
                      Link Up & FDISC being sent       Initializing
                      Disable request                  Disable
   Linkdown:          Link Up                          Unknown
   Initializing:      FDISC ACC                        Active
                      FDISC LS_RJT w/ no resources     No Fabric Resources
                      FDISC LS_RJT w/ invalid          Fabric Rejected WWN
                        pname or invalid nport_id
                      FDISC LS_RJT failed for          Vport Failed
                        other reasons
                      Link Down                        Linkdown
                      Disable request                  Disable
   Disable:           Enable request                   Unknown
   Active:            LOGO received from fabric        Fabric Logout
                      Link Down                        Linkdown
                      Disable request                  Disable
   Fabric Logout:     Link still up                    Unknown

   The following four error states all have the same transitions:
   No Fabric Support:
   No Fabric Resources:
   Fabric Rejected WWN:
   Vport Failed:
                      Disable request                  Disable
                      Link goes down                   Linkdown
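
As a rough sketch of how an LLDD might drive these transitions, the
fragment below uses the fc_vport_set_state() helper from
include/scsi/scsi_transport_fc.h (which also records vport_last_state
for non-transitory states).  The xxx_* function names and the completion
flags are hypothetical and only illustrate the FDISC-related transitions:

  #include <scsi/scsi_transport_fc.h>

  /* called by the (hypothetical) driver just before it sends the FDISC */
  static void xxx_vport_fdisc_start(struct fc_vport *vport)
  {
          /* transitory state; not preserved in vport_last_state */
          fc_vport_set_state(vport, FC_VPORT_INITIALIZING);
          /* ... the driver issues the FDISC ELS here ... */
  }

  /* called by the (hypothetical) driver when the FDISC completes */
  static void xxx_vport_fdisc_done(struct fc_vport *vport, bool acc,
                                   bool no_resources)
  {
          if (acc)
                  fc_vport_set_state(vport, FC_VPORT_ACTIVE);
          else if (no_resources)
                  fc_vport_set_state(vport, FC_VPORT_NO_FABRIC_RSCS);
          else
                  fc_vport_set_state(vport, FC_VPORT_FAILED);
  }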


Transport <-> LLDD Interfaces:
-------------------------------

Vport support by LLDD:

  The LLDD indicates support for vports by supplying a vport_create()
  function in the transport template.  The presence of this function will
  cause the creation of the new attributes on the fc_host.  As part of
  the physical port completing its initialization relative to the
  transport, it should set the max_npiv_vports attribute to indicate the
  maximum number of vports the driver and/or adapter supports.
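
  For example, a driver's transport template for the physical port might
  look like the following sketch.  The xxx_* entry points are hypothetical,
  and only the vport-related members are shown; a real template also fills
  in the usual show_/get_/set_ fields.  The fc_host_max_npiv_vports()
  accessor macro is assumed to be the one provided by scsi_transport_fc.h:

    #include <scsi/scsi_transport_fc.h>

    /* hypothetical LLDD entry points */
    static int xxx_vport_create(struct fc_vport *vport, bool disable);
    static int xxx_vport_disable(struct fc_vport *vport, bool disable);
    static int xxx_vport_delete(struct fc_vport *vport);

    /* template registered for the physical port: exports the vport hooks */
    static struct fc_function_template xxx_transport_functions = {
            /* ... usual attribute/show/get/set members ... */
            .vport_create   = xxx_vport_create,
            .vport_disable  = xxx_vport_disable,
            .vport_delete   = xxx_vport_delete,
    };

    /* template registered for vport scsi_hosts: no vport hooks */
    static struct fc_function_template xxx_vport_transport_functions = {
            /* ... usual attribute/show/get/set members ... */
    };

    /* during physical-port initialization, e.g.: */
    static void xxx_set_vport_limit(struct Scsi_Host *shost)
    {
            fc_host_max_npiv_vports(shost) = 64;  /* example adapter limit */
    }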

Vport Creation:

  The LLDD vport_create() syntax is:

      int vport_create(struct fc_vport *vport, bool disable)

    where:
      vport:    Is the newly allocated vport object
      disable:  If "true", the vport is to be created in a disabled state.
                If "false", the vport is to be enabled upon creation.

  When a request is made to create a new vport (via sgio/netlink, or the
  vport_create fc_host attribute), the transport will validate that the LLDD
  can support another vport (e.g. max_npiv_vports > npiv_vports_inuse).
  If not, the create request will fail.  If space remains, the transport
  will increment the vport count, create the vport object, and then call the
  LLDD's vport_create() function with the newly allocated vport object.

  As mentioned above, vport creation is divided into two parts:
    - Creation with the kernel and LLDD.  This means all transport and
      driver data structures are built up, and device objects created.
      This is equivalent to a driver "attach" on an adapter, which is
      independent of the adapter's link state.
    - Instantiation of the vport on the FC link via ELS traffic, etc.
      This is equivalent to a "link up" and successful link initialization.

  The LLDD's vport_create() function will not synchronously wait for both
  parts to be fully completed before returning.  It must validate that the
  infrastructure exists to support NPIV, and complete the first part of
  vport creation (data structure build up) before returning.  We do not
  hinge vport_create() on the link-side operation mainly because:
    - The link may be down.  It is not a failure if it is.  It simply
      means the vport is in an inoperable state until the link comes up.
      This is consistent with the link bouncing post vport creation.
    - The vport may be created in a disabled state.
    - This is consistent with a model where the vport equates to an
      FC adapter: vport_create is synonymous with driver attachment
      to the adapter, which is independent of link state.

  Note: special error codes have been defined to delineate infrastructure
    failure cases for quicker resolution.

  The expected behavior for the LLDD's vport_create() function is
  (a rough sketch follows this list):
    - Validate Infrastructure:
        - If the driver or adapter cannot support another vport, whether
            due to improper firmware, (a lie about) max_npiv, or a lack of
            some other resource - return VPCERR_UNSUPPORTED.
        - If the driver validates the WWN's against those already active on
            the adapter and detects an overlap - return VPCERR_BAD_WWN.
        - If the driver detects the topology is loop, non-fabric, or the
            FLOGI did not support NPIV - return VPCERR_NO_FABRIC_SUPP.
    - Allocate data structures.  If errors are encountered, such as out
        of memory conditions, return the respective negative Exxx error code.
    - If the role is FCP Initiator, the LLDD is to:
        - Call scsi_host_alloc() to allocate a scsi_host for the vport.
        - Call scsi_add_host(new_shost, &vport->dev) to start the scsi_host
          and bind it as a child of the vport device.
        - Initialize the fc_host attribute values.
    - Kick off further vport state transitions based on the disable flag and
        link state - and return success (zero).
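
  A minimal sketch of such a vport_create() handler is shown below.  The
  xxx_* helpers, the xxx_vport private structure, and xxx_vport_template
  are hypothetical driver constructs; only the calls into the SCSI and FC
  transport layers are taken from the steps above:

    #include <scsi/scsi_host.h>
    #include <scsi/scsi_transport_fc.h>

    static int xxx_vport_create(struct fc_vport *fc_vport, bool disable)
    {
            /* parent assumed to be the physical port's scsi_host */
            struct Scsi_Host *parent = vport_to_shost(fc_vport);
            struct Scsi_Host *shost;
            struct xxx_vport *vp;           /* hypothetical per-vport data */
            int error;

            /* 1. validate infrastructure */
            if (!xxx_hw_supports_npiv(parent))
                    return VPCERR_UNSUPPORTED;
            if (xxx_wwn_in_use(parent, fc_vport->port_name,
                               fc_vport->node_name))
                    return VPCERR_BAD_WWN;
            if (!xxx_fabric_supports_npiv(parent))
                    return VPCERR_NO_FABRIC_SUPP;

            /* 2. allocate driver data structures and the vport's scsi_host */
            shost = scsi_host_alloc(&xxx_vport_template, sizeof(*vp));
            if (!shost)
                    return -ENOMEM;
            vp = shost_priv(shost);
            vp->fc_vport = fc_vport;
            fc_vport->dd_data = vp;

            /* 3. start the scsi_host as a child of the vport device */
            error = scsi_add_host(shost, &fc_vport->dev);
            if (error) {
                    scsi_host_put(shost);
                    return error;
            }

            /* 4. initialize the vport's fc_host attribute values */
            fc_host_node_name(shost) = fc_vport->node_name;
            fc_host_port_name(shost) = fc_vport->port_name;
            fc_host_port_type(shost) = FC_PORTTYPE_NPIV;

            /* 5. kick off link-side instantiation unless created disabled */
            if (disable)
                    fc_vport_set_state(fc_vport, FC_VPORT_DISABLED);
            else
                    xxx_vport_start_fdisc(vp);  /* sets INITIALIZING, etc. */

            return 0;
    }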

  LLDD Implementers Notes:
    - It is suggested that there be different fc_function_templates for
      the physical port and the virtual ports.  The physical port's template
      would have the vport_create, vport_delete, and vport_disable functions,
      while the vports' templates would not.
    - It is suggested that there be different scsi_host_templates
      for the physical port and virtual port.  Likely, there are driver
      attributes, embedded into the scsi_host_template, that are applicable
      for the physical port only (link speed, topology setting, etc).  This
      ensures that the attributes are applicable to the respective scsi_host.


Vport Disable/Enable:

  The LLDD vport_disable() syntax is:

      int vport_disable(struct fc_vport *vport, bool disable)

    where:
      vport:    Is the vport to be enabled or disabled
      disable:  If "true", the vport is to be disabled.
                If "false", the vport is to be enabled.

  When a request is made to change the disabled state on a vport, the
  transport will validate the request against the existing vport state.
  If the request is to disable and the vport is already disabled, the
  request will fail.  Similarly, if the request is to enable, and the
  vport is not in a disabled state, the request will fail.  If the request
  is valid for the vport state, the transport will call the LLDD to
  change the vport's state.

  Within the LLDD, if a vport is disabled, it remains instantiated with
  the kernel and LLDD, but it is not active or visible on the FC link in
  any way (see Vport Creation and the two-part instantiation discussion).
  The vport will remain in this state until it is deleted or re-enabled.
  When enabling a vport, the LLDD reinstantiates the vport on the FC
  link - essentially restarting the LLDD state machine (see Vport States
  above).
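
  A possible shape for the handler, consistent with the state table above
  (Disable on a disable request, Unknown on re-enable), is sketched here;
  the xxx_* helpers and the xxx_vport structure are hypothetical:

    static int xxx_vport_disable(struct fc_vport *fc_vport, bool disable)
    {
            struct xxx_vport *vp = fc_vport->dd_data;   /* hypothetical */

            if (disable) {
                    /* log out of the fabric, but stay instantiated */
                    xxx_vport_logo(vp);
                    fc_vport_set_state(fc_vport, FC_VPORT_DISABLED);
            } else {
                    /* restart the vport state machine from scratch */
                    fc_vport_set_state(fc_vport, FC_VPORT_UNKNOWN);
                    xxx_vport_start_fdisc(vp);
            }
            return 0;
    }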


Vport Deletion:

  The LLDD vport_delete() syntax is:

      int vport_delete(struct fc_vport *vport)

    where:
      vport:    Is the vport to delete

  When a request is made to delete a vport (via sgio/netlink, or via the
  fc_host or fc_vport vport_delete attributes), the transport will call
  the LLDD to terminate the vport on the FC link, and tear down all other
  data structures and references.  If the LLDD completes successfully,
  the transport will tear down the vport objects and complete the vport
  removal.  If the LLDD delete request fails, the vport object will remain,
  but will be in an indeterminate state.

  Within the LLDD, the normal code paths for a scsi_host teardown should
  be followed.  E.g. if the vport has an FCP Initiator role, the LLDD
  will call fc_remove_host() for the vport's scsi_host, followed by
  scsi_remove_host() and scsi_host_put() for the vport's scsi_host.
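
  For an FCP Initiator vport, that teardown sequence might look like the
  sketch below; the xxx_* helpers and the vp->shost field are hypothetical:

    static int xxx_vport_delete(struct fc_vport *fc_vport)
    {
            struct xxx_vport *vp = fc_vport->dd_data;   /* hypothetical */
            struct Scsi_Host *shost = vp->shost;        /* vport's scsi_host */

            /* terminate the vport on the FC link (LOGO, release N_Port_ID) */
            xxx_vport_logo(vp);

            /* normal scsi_host teardown path for the vport's scsi_host */
            fc_remove_host(shost);
            scsi_remove_host(shost);
            scsi_host_put(shost);
            return 0;
    }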


Other:

  fc_host port_type attribute:
    There is a new fc_host port_type value - FC_PORTTYPE_NPIV.  This value
    must be set on all vport-based fc_hosts.  Normally, on a physical port,
    the port_type attribute would be set to NPORT, NLPORT, etc based on the
    topology type and existence of the fabric.  As this is not applicable to
    a vport, it makes more sense to report the FC mechanism used to create
    the vport.

  Driver unload:
    FC drivers are required to call fc_remove_host() prior to calling
    scsi_remove_host().  This allows the fc_host to tear down all remote
    ports prior to the scsi_host being torn down.  The fc_remove_host()
    call was updated to remove all vports for the fc_host as well.


Transport supplied functions
----------------------------

The following functions are supplied by the FC transport for use by LLDDs.

   fc_vport_create      - create a vport
   fc_vport_terminate   - detach and remove a vport

Details:

  /**
   * fc_vport_create - Admin App or LLDD requests creation of a vport
   * @shost:     scsi host the virtual port is connected to.
   * @ids:       The world wide names, FC4 port roles, etc for
   *             the virtual port.
   *
   * Notes:
   *     This routine assumes no locks are held on entry.
   */
  struct fc_vport *
  fc_vport_create(struct Scsi_Host *shost, struct fc_vport_identifiers *ids)

  /**
   * fc_vport_terminate - Admin App or LLDD requests termination of a vport
   * @vport:     fc_vport to be terminated
   *
   * Calls the LLDD vport_delete() function, then deallocates and removes
   * the vport from the shost and object tree.
   *
   * Notes:
   *     This routine assumes no locks are held on entry.
   */
  int
  fc_vport_terminate(struct fc_vport *vport)
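
  As an example of driver-initiated creation (e.g. from a vendor ioctl or
  netlink handler), a call following the prototype above might be built as
  in this sketch.  The fc_vport_identifiers field names shown are assumed
  to match those in include/scsi/scsi_transport_fc.h, the xxx_* name is
  hypothetical, and the WWN values passed in are placeholders:

    #include <linux/string.h>
    #include <linux/types.h>
    #include <scsi/scsi_transport_fc.h>

    static struct fc_vport *xxx_new_initiator_vport(struct Scsi_Host *shost,
                                                    u64 wwpn, u64 wwnn)
    {
            struct fc_vport_identifiers ids;

            memset(&ids, 0, sizeof(ids));
            ids.port_name  = wwpn;
            ids.node_name  = wwnn;
            ids.roles      = FC_PORT_ROLE_FCP_INITIATOR;
            ids.vport_type = FC_PORTTYPE_NPIV;
            ids.disable    = false;   /* instantiate on the link right away */

            /* the transport allocates the vport and calls vport_create() */
            return fc_vport_create(shost, &ids);
    }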


FC BSG support (CT & ELS passthru, and more)
========================================================================
<< To Be Supplied >>


Credits
=======
The following people have contributed to this document:
   James Smart
   james.smart@emulex.com