
Linux* Base Driver for the Intel(R) Ethernet 10 Gigabit PCI Express Family of
Adapters
=============================================================================

Intel 10 Gigabit Linux driver.
Copyright(c) 1999 - 2013 Intel Corporation.

Contents
========

- Identifying Your Adapter
- Additional Configurations
- Performance Tuning
- Known Issues
- Support
Identifying Your Adapter
========================

The driver in this release is compatible with 82598, 82599 and X540-based
Intel Network Connections.

For more information on how to identify your adapter, go to the Adapter &
Driver ID Guide at:

    http://support.intel.com/support/network/sb/CS-012904.htm
SFP+ Devices with Pluggable Optics
----------------------------------

82599-BASED ADAPTERS

NOTES: If your 82599-based Intel(R) Network Adapter came with Intel optics,
or is an Intel(R) Ethernet Server Adapter X520-2, then it only supports
Intel optics and/or the direct attach cables listed below.

When 82599-based SFP+ devices are connected back to back, they should be set
to the same Speed setting via ethtool. Results may vary if you mix speed
settings.
Supplier    Type                                      Part Numbers

SR Modules
Intel       DUAL RATE 1G/10G SFP+ SR (bailed)         FTLX8571D3BCV-IT
Intel       DUAL RATE 1G/10G SFP+ SR (bailed)         AFBR-703SDDZ-IN1
Intel       DUAL RATE 1G/10G SFP+ SR (bailed)         AFBR-703SDZ-IN2

LR Modules
Intel       DUAL RATE 1G/10G SFP+ LR (bailed)         FTLX1471D3BCV-IT
Intel       DUAL RATE 1G/10G SFP+ LR (bailed)         AFCT-701SDDZ-IN1
Intel       DUAL RATE 1G/10G SFP+ LR (bailed)         AFCT-701SDZ-IN2

The following is a list of 3rd party SFP+ modules and direct attach cables
that have received some testing. Not all modules are applicable to all
devices.

Supplier    Type                                      Part Numbers
Finisar     SFP+ SR bailed, 10g single rate           FTLX8571D3BCL
Avago       SFP+ SR bailed, 10g single rate           AFBR-700SDZ
Finisar     SFP+ LR bailed, 10g single rate           FTLX1471D3BCL
Finisar     DUAL RATE 1G/10G SFP+ SR (No Bail)        FTLX8571D3QCV-IT
Avago       DUAL RATE 1G/10G SFP+ SR (No Bail)        AFBR-703SDZ-IN1
Finisar     DUAL RATE 1G/10G SFP+ LR (No Bail)        FTLX1471D3QCV-IT
Avago       DUAL RATE 1G/10G SFP+ LR (No Bail)        AFCT-701SDZ-IN1
Finisar     1000BASE-T SFP                            FCLF8522P2BTL
Avago       1000BASE-T SFP                            ABCU-5710RZ

82599-based adapters support all passive and active limiting direct attach
cables that comply with SFF-8431 v4.1 and SFF-8472 v10.4 specifications.
Laser turns off for SFP+ when device is down
--------------------------------------------

"ip link set <ethX> down" turns off the laser for 82599-based SFP+ fiber
adapters. "ip link set <ethX> up" turns the laser back on.
82598-BASED ADAPTERS

NOTES for 82598-Based Adapters:
- Intel(R) Network Adapters that support removable optical modules only
  support their original module type (i.e., the Intel(R) 10 Gigabit SR Dual
  Port Express Module only supports SR optical modules). If you plug in a
  different type of module, the driver will not load.
- Hot Swapping/hot plugging optical modules is not supported.
- Only single speed, 10 gigabit modules are supported.
- LAN on Motherboard (LOMs) may support DA, SR, or LR modules. Other module
  types are not supported. Please see your system documentation for details.

The following is a list of 3rd party SFP+ modules and direct attach cables
that have received some testing. Not all modules are applicable to all
devices.

Supplier    Type                                      Part Numbers
Finisar     SFP+ SR bailed, 10g single rate           FTLX8571D3BCL
Avago       SFP+ SR bailed, 10g single rate           AFBR-700SDZ
Finisar     SFP+ LR bailed, 10g single rate           FTLX1471D3BCL

82598-based adapters support all passive direct attach cables that comply
with SFF-8431 v4.1 and SFF-8472 v10.4 specifications. Active direct attach
cables are not supported.
Flow Control
------------

Ethernet Flow Control (IEEE 802.3x) can be configured with ethtool to enable
receiving and transmitting pause frames for ixgbe. When Tx is enabled, PAUSE
frames are generated when the receive packet buffer crosses a predefined
threshold. When Rx is enabled, the transmit unit will halt for the time delay
specified when a PAUSE frame is received.

Flow control is enabled by default. To disable it when linked to a flow
control capable link partner, use ethtool:

    ethtool -A eth? autoneg off rx off tx off

NOTE: For 82598 backplane cards entering 1 gig mode, flow control default
behavior is changed to off. Flow control in 1 gig mode on these devices can
lead to Tx hangs.
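
For example, assuming your ethtool build supports the pause options and using
eth2 as a placeholder interface, you can query the current settings and
re-enable autonegotiated flow control in both directions as follows:

    # show the current pause frame settings
    ethtool -a eth2

    # re-enable flow control for both receive and transmit
    ethtool -A eth2 autoneg on rx on tx on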
Intel(R) Ethernet Flow Director
-------------------------------

Flow Director supports advanced filters that direct receive packets by their
flows to different queues and enables tight control on routing a flow in the
platform. It matches flows and CPU cores for flow affinity and supports
multiple parameters for flexible flow classification and load balancing.

Flow Director is enabled only if the kernel is multiple TX queue capable.
An included script (set_irq_affinity.sh) automates setting the IRQ to CPU
affinity.

You can verify that the driver is using Flow Director by looking at the
counters in ethtool: fdir_miss and fdir_match.
Other ethtool Commands:

To enable Flow Director:
    ethtool -K ethX ntuple on

To add a filter, use the -U switch, e.g.:
    ethtool -U ethX flow-type tcp4 src-ip 10.0.128.23 action 1

To see the list of filters currently present:
    ethtool -u ethX

Perfect Filter: Perfect filter is an interface to load the filter table that
funnels all flow into queue_0 unless an alternative queue is specified using
"action". In that case, any flow that matches the filter criteria will be
directed to the appropriate queue. If the queue is defined as -1, the filter
will drop matching packets.

To account for filter matches and misses, there are two stats in ethtool:
fdir_match and fdir_miss. In addition, rx_queue_N_packets shows the number
of packets processed by the Nth queue.
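
As a sketch (the interface name, addresses, port and queue numbers below are
placeholders, and your ethtool version must support n-tuple filters), a
perfect filter that steers one TCP flow, a drop filter, and the related
counters might look like:

    # steer TCP traffic from 10.0.128.23 with destination port 5001 to queue 2
    ethtool -U eth2 flow-type tcp4 src-ip 10.0.128.23 dst-port 5001 action 2

    # drop all TCP traffic coming from 10.23.4.6
    ethtool -U eth2 flow-type tcp4 src-ip 10.23.4.6 action -1

    # list the installed filters and check the match/miss counters
    ethtool -u eth2
    ethtool -S eth2 | grep fdir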
NOTE: Receive Packet Steering (RPS) and Receive Flow Steering (RFS) are not
compatible with Flow Director. If Flow Director is enabled, these will be
disabled.

The following three parameters impact Flow Director.
FdirMode
--------
Valid Range: 0-2 (0=off, 1=ATR, 2=Perfect filter mode)
Default Value: 1

  Flow Director filtering modes.
FdirPballoc
-----------
Valid Range: 0-2 (0=64k, 1=128k, 2=256k)
Default Value: 0

  Flow Director allocated packet buffer size.
AtrSampleRate
-------------
Valid Range: 1-100
Default Value: 20

  Software ATR Tx packet sample rate. For example, when set to 20, every 20th
  packet is sampled to determine whether it will create a new flow.
Node
----
Valid Range: 0-n
Default Value: -1 (off)

  0 - n: where n is the number of NUMA nodes (i.e. 0 - 3) currently online in
  your system
  -1: turns this option off

  The Node parameter allows you to choose which NUMA node you want the
  adapter to allocate memory on.
max_vfs
-------
Valid Range: 1-63
Default Value: 0

  If the value is greater than 0, it will also force the VMDq parameter to be
  1 or more.

  This parameter adds support for SR-IOV. It causes the driver to spawn up to
  max_vfs worth of virtual functions.
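
These parameters are set when the ixgbe module is loaded. A minimal sketch,
assuming your build of the driver actually exposes the parameters you pass
(the in-tree driver accepts only a subset of the parameters above, e.g.
max_vfs, while others are specific to the out-of-tree driver), and using
example values only:

    # load the driver with up to 4 virtual functions per port
    modprobe ixgbe max_vfs=4

    # or make a setting persistent across reloads (file name is an example)
    echo "options ixgbe max_vfs=4" > /etc/modprobe.d/ixgbe.conf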
Additional Configurations
=========================

Jumbo Frames
------------
The driver supports Jumbo Frames for all adapters. Jumbo Frames support is
enabled by changing the MTU to a value larger than the default of 1500.
Use the ip command to increase the MTU size. For example:

    ip link set dev ethx mtu 9000

The maximum MTU setting for Jumbo Frames is 9710. This value coincides
with the maximum Jumbo Frames size of 9728.
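
For example, to raise the MTU on a hypothetical interface eth2 and confirm
that the change took effect (note that the setting is not persistent across
reboots; persisting it is distribution specific):

    ip link set dev eth2 mtu 9000
    ip link show eth2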
Generic Receive Offload, aka GRO
--------------------------------
The driver supports the in-kernel software implementation of GRO. GRO has
shown that by coalescing Rx traffic into larger chunks of data, CPU
utilization can be significantly reduced when under large Rx load. GRO is an
evolution of the previously-used LRO interface. GRO is able to coalesce
other protocols besides TCP. It's also safe to use with configurations that
are problematic for LRO, namely bridging and iSCSI.
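
GRO is controlled through ethtool's offload interface. A minimal sketch,
using eth2 as a placeholder interface:

    # show the current offload settings (look for generic-receive-offload)
    ethtool -k eth2

    # disable or re-enable GRO
    ethtool -K eth2 gro off
    ethtool -K eth2 gro on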
Data Center Bridging, aka DCB
-----------------------------
DCB is a configuration Quality of Service implementation in hardware.
It uses the VLAN priority tag (802.1p) to filter traffic. That means
that there are 8 different priorities that traffic can be filtered into.
It also enables priority flow control which can limit or eliminate the
number of dropped packets during network stress. Bandwidth can be
allocated to each of these priorities, which is enforced at the hardware
level.

To enable DCB support in ixgbe, you must enable the DCB netlink layer to
allow the userspace tools (see below) to communicate with the driver.
This can be found in the kernel configuration here:

        -> Networking support
          -> Networking options
            -> Data Center Bridging support

Once this is selected, DCB support must be selected for ixgbe. This can
be found here:

        -> Device Drivers
          -> Network device support (NETDEVICES [=y])
            -> Ethernet (10000 Mbit) (NETDEV_10000 [=y])
              -> Intel(R) 10GbE PCI Express adapters support
                -> Data Center Bridging (DCB) Support

After these options are selected, you must rebuild your kernel and your
modules.

In order to use DCB, userspace tools must be downloaded and installed.
The dcbd tools can be found at:

        http://e1000.sf.net
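
To confirm that a running kernel was built with the options above before
installing the userspace tools, you can grep its configuration; the config
file path below is an example and your distribution may instead provide
/proc/config.gz:

    grep -E 'CONFIG_DCB|CONFIG_IXGBE_DCB' /boot/config-$(uname -r)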
Ethtool
-------
The driver utilizes the ethtool interface for driver configuration and
diagnostics, as well as displaying statistical information. The latest
ethtool version is required for this functionality.

The latest release of ethtool can be found from
http://ftp.kernel.org/pub/software/network/ethtool/
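
Typical invocations (eth2 is a placeholder interface) include:

    ethtool eth2         # link settings: speed, duplex and link state
    ethtool -i eth2      # driver name, version and firmware version
    ethtool -S eth2      # driver and per-queue statistics
    ethtool -t eth2      # self-test; the default offline test disrupts traffic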
FCoE
----
This release of the ixgbe driver contains new code to enable users to use
Fibre Channel over Ethernet (FCoE) and Data Center Bridging (DCB)
functionality that is supported by the 82598-based hardware. This code has
no default effect on the regular driver operation, and configuring DCB and
FCoE is outside the scope of this driver README. Refer to
http://www.open-fcoe.org/ for FCoE project information and contact
e1000-eedc@lists.sourceforge.net for DCB information.
MAC and VLAN anti-spoofing feature
----------------------------------
When a malicious driver attempts to send a spoofed packet, it is dropped by
the hardware and not transmitted. An interrupt is sent to the PF driver
notifying it of the spoof attempt.

When a spoofed packet is detected the PF driver will send the following
message to the system log (displayed by the "dmesg" command):

    Spoof event(s) detected on VF (n)

where n = the VF that attempted to do the spoofing.
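
You can watch for these events, and, with an iproute2 version that supports
the per-VF spoofchk option, query or toggle spoof checking per VF. In the
sketch below, eth2 and VF number 0 are placeholders:

    # check the system log for spoof events
    dmesg | grep -i spoof

    # show per-VF settings, then turn spoof checking off and back on for VF 0
    ip link show eth2
    ip link set eth2 vf 0 spoofchk off
    ip link set eth2 vf 0 spoofchk on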
Performance Tuning
==================
An excellent article on performance tuning can be found at:

http://www.redhat.com/promo/summit/2008/downloads/pdf/Thursday/Mark_Wagner.pdf
Known Issues
============

Enabling SR-IOV in a 32-bit or 64-bit Microsoft* Windows* Server 2008/R2
Guest OS using Intel(R) 82576-based GbE or Intel(R) 82599-based 10GbE
controller under KVM
------------------------------------------------------------------------
KVM Hypervisor/VMM supports direct assignment of a PCIe device to a VM. This
includes traditional PCIe devices, as well as SR-IOV-capable devices using
Intel 82576-based and 82599-based controllers.

While direct assignment of a PCIe device or an SR-IOV Virtual Function (VF)
to a Linux-based VM running a 2.6.32 or later kernel works fine, there is a
known issue with a Microsoft Windows Server 2008 VM that results in a "yellow
bang" error. The problem is not within the Intel driver or the SR-IOV logic
of the VMM, but within KVM itself: KVM emulates an older CPU model for the
guests, and this older CPU model does not support MSI-X interrupts, which is
a requirement for Intel SR-IOV.

If you wish to use the Intel 82576 or 82599-based controllers in SR-IOV mode
with KVM and a Microsoft Windows Server 2008 guest, the workaround is to tell
KVM to emulate a different model of CPU when using qemu to create the KVM
guest:

    "-cpu qemu64,model=13"
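
A minimal sketch of such a qemu invocation; the memory size, CPU count, disk
image and guest details are placeholders, and the options for actually
assigning the VF to the guest are omitted because they depend on your
QEMU/KVM version:

    qemu-system-x86_64 -enable-kvm -cpu qemu64,model=13 \
        -m 4096 -smp 2 -drive file=win2008.img,if=ide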
Support
=======
For general information, go to the Intel support website at:

    http://support.intel.com

or the Intel Wired Networking project hosted by Sourceforge at:

    http://e1000.sourceforge.net

If an issue is identified with the released source code on the supported
kernel with a supported adapter, email the specific information related
to the issue to e1000-devel@lists.sf.net