#
# Block device driver configuration
#

menuconfig MD
	bool "Multiple devices driver support (RAID and LVM)"
	depends on BLOCK
	select SRCU
	help
	  Support multiple physical spindles through a single logical device.
	  Required for RAID and logical volume management.

if MD

config BLK_DEV_MD
	tristate "RAID support"
	---help---
	  This driver lets you combine several hard disk partitions into one
	  logical block device. This can be used to simply append one
	  partition to another one or to combine several redundant hard disks
	  into a RAID1/4/5 device so as to provide protection against hard
	  disk failures. This is called "Software RAID" since the combining of
	  the partitions is done by the kernel. "Hardware RAID" means that the
	  combining is done by a dedicated controller; if you have such a
	  controller, you do not need to say Y here.

	  More information about Software RAID on Linux is contained in the
	  Software RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also learn
	  where to get the supporting user space utilities raidtools.

	  If unsure, say N.

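# Illustrative sketch: a minimal .config fragment that builds the MD core and
# one RAID personality as modules (the values here are an example, not this
# file's defaults):
#   CONFIG_MD=y
#   CONFIG_BLK_DEV_MD=m
#   CONFIG_MD_RAID1=m
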
config MD_AUTODETECT
	bool "Autodetect RAID arrays during kernel boot"
	depends on BLK_DEV_MD=y
	default y
	---help---
	  If you say Y here, then the kernel will try to autodetect raid
	  arrays as part of its boot process.

	  If you don't use raid and say Y, this autodetection can cause
	  a several-second delay in the boot time due to various
	  synchronisation steps that are part of this autodetection.

	  If unsure, say Y.

config MD_LINEAR
	tristate "Linear (append) mode"
	depends on BLK_DEV_MD
	---help---
	  If you say Y here, then your multiple devices driver will be able to
	  use the so-called linear mode, i.e. it will combine the hard disk
	  partitions by simply appending one to the other.

	  To compile this as a module, choose M here: the module
	  will be called linear.

	  If unsure, say Y.

config MD_RAID0
	tristate "RAID-0 (striping) mode"
	depends on BLK_DEV_MD
	---help---
	  If you say Y here, then your multiple devices driver will be able to
	  use the so-called raid0 mode, i.e. it will combine the hard disk
	  partitions into one logical device in such a fashion as to fill them
	  up evenly, one chunk here and one chunk there. This will increase
	  the throughput rate if the partitions reside on distinct disks.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  To compile this as a module, choose M here: the module
	  will be called raid0.

	  If unsure, say Y.

config MD_RAID1
	tristate "RAID-1 (mirroring) mode"
	depends on BLK_DEV_MD
	---help---
	  A RAID-1 set consists of several disk drives which are exact copies
	  of each other. In the event of a mirror failure, the RAID driver
	  will continue to use the operational mirrors in the set, providing
	  an error free MD (multiple device) to the higher levels of the
	  kernel. In a set with N drives, the available space is the capacity
	  of a single drive, and the set protects against a failure of (N - 1)
	  drives.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  If you want to use such a RAID-1 set, say Y. To compile this code
	  as a module, choose M here: the module will be called raid1.

	  If unsure, say Y.

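# Worked example (illustrative): a RAID-1 set of three 2000 MB drives still
# provides only 2000 MB of usable space, but keeps working as long as at
# least one of the three drives survives.
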
config MD_RAID10
	tristate "RAID-10 (mirrored striping) mode"
	depends on BLK_DEV_MD
	---help---
	  RAID-10 provides a combination of striping (RAID-0) and
	  mirroring (RAID-1) with easier configuration and more flexible
	  layout.

	  Unlike RAID-0, but like RAID-1, RAID-10 requires all devices to
	  be the same size (or at least, only as much space as the smallest
	  device provides will be used).

	  RAID-10 provides a variety of layouts that provide different levels
	  of redundancy and performance.

	  RAID-10 requires mdadm-1.7.0 or later, available at:
	  https://www.kernel.org/pub/linux/utils/raid/mdadm/

	  If unsure, say Y.

config MD_RAID456
	tristate "RAID-4/RAID-5/RAID-6 mode"
	depends on BLK_DEV_MD
	select RAID6_PQ
	select LIBCRC32C
	select ASYNC_MEMCPY
	select ASYNC_XOR
	select ASYNC_PQ
	select ASYNC_RAID6_RECOV
	---help---
	  A RAID-5 set of N drives with a capacity of C MB per drive provides
	  the capacity of C * (N - 1) MB, and protects against a failure
	  of a single drive. For a given sector (row) number, (N - 1) drives
	  contain data sectors, and one drive contains the parity protection.
	  For a RAID-4 set, the parity blocks are present on a single drive,
	  while a RAID-5 set distributes the parity across the drives in one
	  of the available parity distribution methods.

	  A RAID-6 set of N drives with a capacity of C MB per drive
	  provides the capacity of C * (N - 2) MB, and protects
	  against a failure of any two drives. For a given sector
	  (row) number, (N - 2) drives contain data sectors, and two
	  drives contain two independent redundancy syndromes. Like
	  RAID-5, RAID-6 distributes the syndromes across the drives
	  in one of the available parity distribution methods.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  If you want to use such a RAID-4/RAID-5/RAID-6 set, say Y. To
	  compile this code as a module, choose M here: the module
	  will be called raid456.

	  If unsure, say Y.

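# Worked example (illustrative): with N = 4 drives of C = 2000 MB each, a
# RAID-5 set provides 2000 * (4 - 1) = 6000 MB and survives one drive
# failure, while a RAID-6 set provides 2000 * (4 - 2) = 4000 MB and
# survives the failure of any two drives.
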
config MD_MULTIPATH
	tristate "Multipath I/O support"
	depends on BLK_DEV_MD
	help
	  MD_MULTIPATH provides a simple multi-path personality for use with
	  the MD framework. It is not under active development. New
	  projects should consider using DM_MULTIPATH which has more
	  features and more testing.

	  If unsure, say N.

config MD_FAULTY
	tristate "Faulty test module for MD"
	depends on BLK_DEV_MD
	help
	  The "faulty" module allows for a block device that occasionally returns
	  read or write errors. It is useful for testing.

	  If unsure, say N.

config MD_CLUSTER
	tristate "Cluster Support for MD"
	depends on BLK_DEV_MD
	depends on DLM
	default n
	---help---
	  Clustering support for MD devices. This enables locking and
	  synchronization across multiple systems on the cluster, so all
	  nodes in the cluster can access the MD devices simultaneously.

	  This brings the redundancy (and uptime) of RAID levels across the
	  nodes of the cluster. Currently, it can work with raid1 and raid10
	  (limited support).

	  If unsure, say N.

source "drivers/md/bcache/Kconfig"

config BLK_DEV_DM_BUILTIN
	bool

config BLK_DEV_DM
	tristate "Device mapper support"
	select BLK_DEV_DM_BUILTIN
	depends on DAX || DAX=n
	---help---
	  Device-mapper is a low level volume manager. It works by allowing
	  people to specify mappings for ranges of logical sectors. Various
	  mapping types are available, in addition people may write their own
	  modules containing custom mappings if they wish.

	  Higher level volume managers such as LVM2 use this driver.

	  To compile this as a module, choose M here: the module will be
	  called dm-mod.

	  If unsure, say N.

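# Illustrative sketch: a .config fragment that builds the device-mapper core
# and a few common targets as modules (an example selection, not a
# recommendation):
#   CONFIG_BLK_DEV_DM=m
#   CONFIG_DM_SNAPSHOT=m
#   CONFIG_DM_MIRROR=m
#   CONFIG_DM_CRYPT=m
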
config DM_MQ_DEFAULT
	bool "request-based DM: use blk-mq I/O path by default"
	depends on BLK_DEV_DM
	---help---
	  This option enables the blk-mq based I/O path for request-based
	  DM devices by default. With the option the dm_mod.use_blk_mq
	  module/boot option defaults to Y, without it to N, but it can
	  still be overridden either way.

	  If unsure say N.

config DM_DEBUG
	bool "Device mapper debugging support"
	depends on BLK_DEV_DM
	---help---
	  Enable this for messages that may help debug device-mapper problems.

	  If unsure, say N.

config DM_BUFIO
	tristate
	depends on BLK_DEV_DM
	---help---
	  This interface allows you to do buffered I/O on a device and acts
	  as a cache, holding recently-read blocks in memory and performing
	  delayed writes.

config DM_DEBUG_BLOCK_MANAGER_LOCKING
	bool "Block manager locking"
	depends on DM_BUFIO
	---help---
	  Block manager locking can catch various metadata corruption issues.

	  If unsure, say N.

config DM_DEBUG_BLOCK_STACK_TRACING
	bool "Keep stack trace of persistent data block lock holders"
	depends on STACKTRACE_SUPPORT && DM_DEBUG_BLOCK_MANAGER_LOCKING
	select STACKTRACE
	---help---
	  Enable this for messages that may help debug problems with the
	  block manager locking used by thin provisioning and caching.

	  If unsure, say N.

config DM_BIO_PRISON
	tristate
	depends on BLK_DEV_DM
	---help---
	  Some bio locking schemes used by other device-mapper targets
	  including thin provisioning.

source "drivers/md/persistent-data/Kconfig"

config DM_UNSTRIPED
	tristate "Unstriped target"
	depends on BLK_DEV_DM
	---help---
	  Unstripes I/O so it is issued solely on a single drive in a HW
	  RAID0 or dm-striped target.

config DM_CRYPT
	tristate "Crypt target support"
	depends on BLK_DEV_DM
	select CRYPTO
	select CRYPTO_CBC
	---help---
	  This device-mapper target allows you to create a device that
	  transparently encrypts the data on it. You'll need to activate
	  the ciphers you're going to use in the cryptoapi configuration.

	  For further information on dm-crypt and userspace tools see:
	  <https://gitlab.com/cryptsetup/cryptsetup/wikis/DMCrypt>

	  To compile this code as a module, choose M here: the module will
	  be called dm-crypt.

	  If unsure, say N.

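# Illustrative sketch: dm-crypt only provides the target; the ciphers it uses
# come from the crypto API. An example fragment for an AES-XTS setup (the
# cipher choice here is an assumption, enable whatever you actually use):
#   CONFIG_DM_CRYPT=m
#   CONFIG_CRYPTO_AES=y
#   CONFIG_CRYPTO_XTS=y
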
config DM_SNAPSHOT
	tristate "Snapshot target"
	depends on BLK_DEV_DM
	select DM_BUFIO
	---help---
	  Allow volume managers to take writable snapshots of a device.

config DM_THIN_PROVISIONING
	tristate "Thin provisioning target"
	depends on BLK_DEV_DM
	select DM_PERSISTENT_DATA
	select DM_BIO_PRISON
	---help---
	  Provides thin provisioning and snapshots that share a data store.

config DM_CACHE
	tristate "Cache target (EXPERIMENTAL)"
	depends on BLK_DEV_DM
	default n
	select DM_PERSISTENT_DATA
	select DM_BIO_PRISON
	---help---
	  dm-cache attempts to improve performance of a block device by
	  moving frequently used data to a smaller, higher performance
	  device. Different 'policy' plugins can be used to change the
	  algorithms used to select which blocks are promoted, demoted,
	  cleaned etc. It supports writeback and writethrough modes.

config DM_CACHE_SMQ
	tristate "Stochastic MQ Cache Policy (EXPERIMENTAL)"
	depends on DM_CACHE
	default y
	---help---
	  A cache policy that uses a multiqueue ordered by recent hits
	  to select which blocks should be promoted and demoted.
	  This is meant to be a general purpose policy. It prioritises
	  reads over writes. This SMQ policy (vs MQ) offers the promise
	  of less memory utilization, improved performance and increased
	  adaptability in the face of changing workloads.

config DM_WRITECACHE
	tristate "Writecache target"
	depends on BLK_DEV_DM
	---help---
	  The writecache target caches writes on persistent memory or SSD.
	  It is intended for databases or other programs that need extremely
	  low commit latency.

	  The writecache target doesn't cache reads because reads are supposed
	  to be cached in standard RAM.

config DM_ERA
	tristate "Era target (EXPERIMENTAL)"
	depends on BLK_DEV_DM
	default n
	select DM_PERSISTENT_DATA
	select DM_BIO_PRISON
	---help---
	  dm-era tracks which parts of a block device are written to
	  over time. Useful for maintaining cache coherency when using
	  vendor snapshots.

config DM_MIRROR
	tristate "Mirror target"
	depends on BLK_DEV_DM
	---help---
	  Allow volume managers to mirror logical volumes, also
	  needed for live data migration tools such as 'pvmove'.

config DM_LOG_USERSPACE
	tristate "Mirror userspace logging"
	depends on DM_MIRROR && NET
	select CONNECTOR
	---help---
	  The userspace logging module provides a mechanism for
	  relaying the dm-dirty-log API to userspace. Log designs
	  which are more suited to userspace implementation (e.g.
	  shared storage logs) or experimental logs can be implemented
	  by leveraging this framework.

config DM_RAID
	tristate "RAID 1/4/5/6/10 target"
	depends on BLK_DEV_DM
	select MD_RAID0
	select MD_RAID1
	select MD_RAID10
	select MD_RAID456
	select BLK_DEV_MD
	---help---
	  A dm target that supports RAID1, RAID10, RAID4, RAID5 and RAID6
	  mappings.

	  A RAID-5 set of N drives with a capacity of C MB per drive provides
	  the capacity of C * (N - 1) MB, and protects against a failure
	  of a single drive. For a given sector (row) number, (N - 1) drives
	  contain data sectors, and one drive contains the parity protection.
	  For a RAID-4 set, the parity blocks are present on a single drive,
	  while a RAID-5 set distributes the parity across the drives in one
	  of the available parity distribution methods.

	  A RAID-6 set of N drives with a capacity of C MB per drive
	  provides the capacity of C * (N - 2) MB, and protects
	  against a failure of any two drives. For a given sector
	  (row) number, (N - 2) drives contain data sectors, and two
	  drives contain two independent redundancy syndromes. Like
	  RAID-5, RAID-6 distributes the syndromes across the drives
	  in one of the available parity distribution methods.

config DM_ZERO
	tristate "Zero target"
	depends on BLK_DEV_DM
	---help---
	  A target that discards writes, and returns all zeroes for
	  reads. Useful in some recovery situations.

config DM_MULTIPATH
	tristate "Multipath target"
	depends on BLK_DEV_DM
	# nasty syntax but means make DM_MULTIPATH independent
	# of SCSI_DH if the latter isn't defined but if
	# it is, DM_MULTIPATH must depend on it. We get a build
	# error if SCSI_DH=m and DM_MULTIPATH=y
	depends on !SCSI_DH || SCSI
	---help---
	  Allow volume managers to support multipath hardware.

config DM_MULTIPATH_QL
	tristate "I/O Path Selector based on the number of in-flight I/Os"
	depends on DM_MULTIPATH
	---help---
	  This path selector is a dynamic load balancer which selects
	  the path with the least number of in-flight I/Os.

	  If unsure, say N.

config DM_MULTIPATH_ST
	tristate "I/O Path Selector based on the service time"
	depends on DM_MULTIPATH
	---help---
	  This path selector is a dynamic load balancer which selects
	  the path expected to complete the incoming I/O in the shortest
	  time.

	  If unsure, say N.

config DM_DELAY
	tristate "I/O delaying target"
	depends on BLK_DEV_DM
	---help---
	  A target that delays reads and/or writes and can send
	  them to different devices. Useful for testing.

	  If unsure, say N.

config DM_UEVENT
	bool "DM uevents"
	depends on BLK_DEV_DM
	---help---
	  Generate udev events for DM events.

config DM_FLAKEY
	tristate "Flakey target"
	depends on BLK_DEV_DM
	---help---
	  A target that intermittently fails I/O for debugging purposes.

config DM_VERITY
	tristate "Verity target support"
	depends on BLK_DEV_DM
	select CRYPTO
	select CRYPTO_HASH
	select DM_BUFIO
	---help---
	  This device-mapper target creates a read-only device that
	  transparently validates the data on one underlying device against
	  a pre-generated tree of cryptographic checksums stored on a second
	  device.

	  You'll need to activate the digests you're going to use in the
	  cryptoapi configuration.

	  To compile this code as a module, choose M here: the module will
	  be called dm-verity.

	  If unsure, say N.

config DM_VERITY_FEC
	bool "Verity forward error correction support"
	depends on DM_VERITY
	select REED_SOLOMON
	select REED_SOLOMON_DEC8
	---help---
	  Add forward error correction support to dm-verity. This option
	  makes it possible to use pre-generated error correction data to
	  recover from corrupted blocks.

	  If unsure, say N.

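# Illustrative sketch: dm-verity built as a module with forward error
# correction compiled in (DM_VERITY_FEC is a bool, so it is either built into
# the dm-verity code or left out entirely):
#   CONFIG_DM_VERITY=m
#   CONFIG_DM_VERITY_FEC=y
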
config DM_SWITCH
	tristate "Switch target support (EXPERIMENTAL)"
	depends on BLK_DEV_DM
	---help---
	  This device-mapper target creates a device that supports an arbitrary
	  mapping of fixed-size regions of I/O across a fixed set of paths.
	  The path used for any specific region can be switched dynamically
	  by sending the target a message.

	  To compile this code as a module, choose M here: the module will
	  be called dm-switch.

	  If unsure, say N.

config DM_LOG_WRITES
	tristate "Log writes target support"
	depends on BLK_DEV_DM
	---help---
	  This device-mapper target takes two devices, one device to use
	  normally, one to log all write operations done to the first device.
	  This is for use by file system developers wishing to verify that
	  their fs is writing a consistent file system at all times by allowing
	  them to replay the log in a variety of ways and to check the
	  contents.

	  To compile this code as a module, choose M here: the module will
	  be called dm-log-writes.

	  If unsure, say N.

config DM_INTEGRITY
	tristate "Integrity target support"
	depends on BLK_DEV_DM
	select BLK_DEV_INTEGRITY
	select DM_BUFIO
	select CRYPTO
	select ASYNC_XOR
	---help---
	  This device-mapper target emulates a block device that has
	  additional per-sector tags that can be used for storing
	  integrity information.

	  This integrity target is used with the dm-crypt target to
	  provide authenticated disk encryption or it can be used
	  standalone.

	  To compile this code as a module, choose M here: the module will
	  be called dm-integrity.

config DM_ZONED
	tristate "Drive-managed zoned block device target support"
	depends on BLK_DEV_DM
	depends on BLK_DEV_ZONED
	---help---
	  This device-mapper target takes a host-managed or host-aware zoned
	  block device and exposes most of its capacity as a regular block
	  device (drive-managed zoned block device) without any write
	  constraints. This is mainly intended for use with file systems that
	  do not natively support zoned block devices but still want to
	  benefit from the increased capacity offered by SMR disks. Other uses
	  by applications using raw block devices (for example object stores)
	  are also possible.

	  To compile this code as a module, choose M here: the module will
	  be called dm-zoned.

	  If unsure, say N.

endif # MD