#
# Block device driver configuration
#

menuconfig MD
	bool "Multiple devices driver support (RAID and LVM)"
	depends on BLOCK
	select SRCU
	help
	  Support multiple physical spindles through a single logical device.
	  Required for RAID and logical volume management.

if MD

config BLK_DEV_MD
	tristate "RAID support"
	---help---
	  This driver lets you combine several hard disk partitions into one
	  logical block device. This can be used to simply append one
	  partition to another one or to combine several redundant hard disks
	  into a RAID1/4/5 device so as to provide protection against hard
	  disk failures. This is called "Software RAID" since the combining of
	  the partitions is done by the kernel. "Hardware RAID" means that the
	  combining is done by a dedicated controller; if you have such a
	  controller, you do not need to say Y here.

	  More information about Software RAID on Linux is contained in the
	  Software RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also learn
	  where to get the supporting user space utilities raidtools.

	  If unsure, say N.

config MD_AUTODETECT
	bool "Autodetect RAID arrays during kernel boot"
	depends on BLK_DEV_MD=y
	default y
	---help---
	  If you say Y here, then the kernel will try to autodetect raid
	  arrays as part of its boot process.

	  If you don't use raid and say Y, this autodetection can cause
	  a several-second delay in the boot time due to the various
	  synchronisation steps that are part of this process.

	  If unsure, say Y.

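# Note on the legacy autodetect path described above (kept brief, and only as
# far as the long-standing behaviour is known): the boot-time scan considers
# partitions whose partition type is 0xfd ("Linux raid autodetect") and arrays
# using the old 0.90 metadata format; arrays with newer metadata are normally
# assembled from userspace (mdadm, typically in an initramfs). On a kernel
# built with this option, the scan can still be skipped at boot with:
#
#   raid=noautodetect
#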
config MD_LINEAR
	tristate "Linear (append) mode"
	depends on BLK_DEV_MD
	---help---
	  If you say Y here, then your multiple devices driver will be able to
	  use the so-called linear mode, i.e. it will combine the hard disk
	  partitions by simply appending one to the other.

	  To compile this as a module, choose M here: the module
	  will be called linear.

	  If unsure, say Y.

config MD_RAID0
	tristate "RAID-0 (striping) mode"
	depends on BLK_DEV_MD
	---help---
	  If you say Y here, then your multiple devices driver will be able to
	  use the so-called raid0 mode, i.e. it will combine the hard disk
	  partitions into one logical device in such a fashion as to fill them
	  up evenly, one chunk here and one chunk there. This will increase
	  the throughput rate if the partitions reside on distinct disks.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  To compile this as a module, choose M here: the module
	  will be called raid0.

	  If unsure, say Y.

config MD_RAID1
	tristate "RAID-1 (mirroring) mode"
	depends on BLK_DEV_MD
	---help---
	  A RAID-1 set consists of several disk drives which are exact copies
	  of each other. In the event of a mirror failure, the RAID driver
	  will continue to use the operational mirrors in the set, providing
	  an error free MD (multiple device) to the higher levels of the
	  kernel. In a set with N drives, the available space is the capacity
	  of a single drive, and the set protects against a failure of (N - 1)
	  drives.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  If you want to use such a RAID-1 set, say Y. To compile this code
	  as a module, choose M here: the module will be called raid1.

	  If unsure, say Y.

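# Illustrative example (assumes the mdadm userspace tool mentioned in these
# help texts; device names are placeholders): a two-disk RAID-1 set could be
# created along the lines of
#
#   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
#
# With two 1000 MB members, usable capacity is that of a single member
# (1000 MB) and the set survives the loss of one drive, matching the
# "capacity of a single drive, protects against (N - 1) failures" rule above.
#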
config MD_RAID10
	tristate "RAID-10 (mirrored striping) mode"
	depends on BLK_DEV_MD
	---help---
	  RAID-10 provides a combination of striping (RAID-0) and
	  mirroring (RAID-1) with easier configuration and more flexible
	  layout.

	  Unlike RAID-0, but like RAID-1, RAID-10 requires all devices to
	  be the same size (or at least, only as much space as the smallest
	  device provides will be used).

	  RAID-10 provides a variety of layouts that provide different levels
	  of redundancy and performance.

	  RAID-10 requires mdadm-1.7.0 or later, available at:
	  ftp://ftp.kernel.org/pub/linux/utils/raid/mdadm/

	  If unsure, say Y.

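# Illustrative example (mdadm invocation; device names are placeholders and
# the layout string follows mdadm's near/far/offset notation): a four-disk
# RAID-10 array with a "near 2" layout could be created with
#
#   mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=4 \
#         /dev/sd[bcde]1
#
# Other layouts (e.g. f2 for "far 2") trade sequential-read performance
# against rebuild and write behaviour.
#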
config MD_RAID456
	tristate "RAID-4/RAID-5/RAID-6 mode"
	depends on BLK_DEV_MD
	select RAID6_PQ
	select LIBCRC32C
	select ASYNC_MEMCPY
	select ASYNC_XOR
	select ASYNC_PQ
	select ASYNC_RAID6_RECOV
	---help---
	  A RAID-5 set of N drives with a capacity of C MB per drive provides
	  the capacity of C * (N - 1) MB, and protects against a failure
	  of a single drive. For a given sector (row) number, (N - 1) drives
	  contain data sectors, and one drive contains the parity protection.
	  For a RAID-4 set, the parity blocks are present on a single drive,
	  while a RAID-5 set distributes the parity across the drives in one
	  of the available parity distribution methods.

	  A RAID-6 set of N drives with a capacity of C MB per drive
	  provides the capacity of C * (N - 2) MB, and protects
	  against a failure of any two drives. For a given sector
	  (row) number, (N - 2) drives contain data sectors, and two
	  drives contain two independent redundancy syndromes. Like
	  RAID-5, RAID-6 distributes the syndromes across the drives
	  in one of the available parity distribution methods.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  If you want to use such a RAID-4/RAID-5/RAID-6 set, say Y. To
	  compile this code as a module, choose M here: the module
	  will be called raid456.

	  If unsure, say Y.

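# Worked example of the capacity arithmetic above (numbers are illustrative):
# with N = 4 drives of C = 1000 MB each,
#
#   RAID-5:  C * (N - 1) = 1000 * 3 = 3000 MB usable, survives any 1 failure
#   RAID-6:  C * (N - 2) = 1000 * 2 = 2000 MB usable, survives any 2 failures
#
# RAID-4 has the same capacity as RAID-5; it differs only in keeping all
# parity blocks on a single drive.
#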
config MD_MULTIPATH
	tristate "Multipath I/O support"
	depends on BLK_DEV_MD
	help
	  MD_MULTIPATH provides a simple multi-path personality for use with
	  the MD framework. It is not under active development. New
	  projects should consider using DM_MULTIPATH, which has more
	  features and more testing.

	  If unsure, say N.

config MD_FAULTY
	tristate "Faulty test module for MD"
	depends on BLK_DEV_MD
	help
	  The "faulty" module allows for a block device that occasionally returns
	  read or write errors. It is useful for testing.

	  If unsure, say N.

config MD_CLUSTER
	tristate "Cluster Support for MD (EXPERIMENTAL)"
	depends on BLK_DEV_MD
	depends on DLM
	default n
	---help---
	  Clustering support for MD devices. This enables locking and
	  synchronization across multiple systems on the cluster, so all
	  nodes in the cluster can access the MD devices simultaneously.

	  This brings the redundancy (and uptime) of RAID levels across the
	  nodes of the cluster.

	  If unsure, say N.

source "drivers/md/bcache/Kconfig"

config BLK_DEV_DM_BUILTIN
	bool

config BLK_DEV_DM
	tristate "Device mapper support"
	select BLK_DEV_DM_BUILTIN
	---help---
	  Device-mapper is a low level volume manager. It works by allowing
	  people to specify mappings for ranges of logical sectors. Various
	  mapping types are available; in addition, people may write their own
	  modules containing custom mappings if they wish.

	  Higher level volume managers such as LVM2 use this driver.

	  To compile this as a module, choose M here: the module will be
	  called dm-mod.

	  If unsure, say N.

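# Illustrative example (assumes the dmsetup userspace tool; device name and
# sector count are placeholders): the "mappings for ranges of logical
# sectors" mentioned above are plain-text tables of the form
# "<start> <length> <target> <args>", e.g. a 1 GiB linear mapping onto
# /dev/sdb1 starting at sector 0:
#
#   echo "0 2097152 linear /dev/sdb1 0" | dmsetup create example-linear
#
# Higher level tools such as LVM2 generate these tables automatically.
#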
config DM_MQ_DEFAULT
	bool "request-based DM: use blk-mq I/O path by default"
	depends on BLK_DEV_DM
	---help---
	  This option enables the blk-mq based I/O path for request-based
	  DM devices by default. With the option, the dm_mod.use_blk_mq
	  module/boot option defaults to Y; without it, to N. Either way it
	  can still be overridden.

	  If unsure, say N.

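# Illustrative example (standard module-parameter mechanisms; the parameter
# name comes from the help text above): regardless of this Kconfig default,
# the setting can be chosen at boot or module load time, e.g.
#
#   dm_mod.use_blk_mq=Y            (on the kernel command line)
#   modprobe dm_mod use_blk_mq=Y   (when device-mapper is built as a module)
#
# and the current value can be read from
# /sys/module/dm_mod/parameters/use_blk_mq.
#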
config DM_DEBUG
	bool "Device mapper debugging support"
	depends on BLK_DEV_DM
	---help---
	  Enable this for messages that may help debug device-mapper problems.

	  If unsure, say N.

config DM_BUFIO
	tristate
	depends on BLK_DEV_DM
	---help---
	  This interface allows you to do buffered I/O on a device and acts
	  as a cache, holding recently-read blocks in memory and performing
	  delayed writes.

config DM_DEBUG_BLOCK_STACK_TRACING
	bool "Keep stack trace of persistent data block lock holders"
	depends on STACKTRACE_SUPPORT && DM_BUFIO
	select STACKTRACE
	---help---
	  Enable this for messages that may help debug problems with the
	  block manager locking used by thin provisioning and caching.

	  If unsure, say N.

config DM_BIO_PRISON
	tristate
	depends on BLK_DEV_DM
	---help---
	  Some bio locking schemes used by other device-mapper targets
	  including thin provisioning.

source "drivers/md/persistent-data/Kconfig"

config DM_CRYPT
	tristate "Crypt target support"
	depends on BLK_DEV_DM
	select CRYPTO
	select CRYPTO_CBC
	---help---
	  This device-mapper target allows you to create a device that
	  transparently encrypts the data on it. You'll need to activate
	  the ciphers you're going to use in the cryptoapi configuration.

	  For further information on dm-crypt and userspace tools see:
	  <https://gitlab.com/cryptsetup/cryptsetup/wikis/DMCrypt>

	  To compile this code as a module, choose M here: the module will
	  be called dm-crypt.

	  If unsure, say N.

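# Illustrative example (assumes the cryptsetup userspace tool referenced
# above; device names are placeholders): a LUKS-formatted dm-crypt device is
# typically set up along the lines of
#
#   cryptsetup luksFormat /dev/sdb1
#   cryptsetup open /dev/sdb1 secretdata
#   mkfs.ext4 /dev/mapper/secretdata
#
# The cipher used (e.g. aes-xts-plain64) must be enabled in the kernel's
# crypto API configuration, as the help text notes.
#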
config DM_SNAPSHOT
	tristate "Snapshot target"
	depends on BLK_DEV_DM
	select DM_BUFIO
	---help---
	  Allow volume managers to take writable snapshots of a device.

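# Illustrative example (assumes LVM2, a typical driver of this target; names
# are placeholders): a writable snapshot of an existing logical volume can be
# taken with
#
#   lvcreate --snapshot --size 1G --name lv0-snap /dev/vg0/lv0
#
# Copied-on-write blocks are stored in the 1G snapshot area, and the snapshot
# can be mounted and modified independently of the origin.
#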
config DM_THIN_PROVISIONING
	tristate "Thin provisioning target"
	depends on BLK_DEV_DM
	select DM_PERSISTENT_DATA
	select DM_BIO_PRISON
	---help---
	  Provides thin provisioning and snapshots that share a data store.

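# Illustrative example (assumes LVM2 as the userspace driver; names and sizes
# are placeholders): a shared data store (thin pool) plus an over-provisioned
# volume might be created with
#
#   lvcreate --type thin-pool --size 10G --name pool0 vg0
#   lvcreate --type thin --virtualsize 100G --thinpool pool0 --name thin0 vg0
#
# Blocks are allocated from pool0 only as thin0 is actually written, and
# snapshots of thin0 share the same pool.
#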
config DM_CACHE
	tristate "Cache target (EXPERIMENTAL)"
	depends on BLK_DEV_DM
	default n
	select DM_PERSISTENT_DATA
	select DM_BIO_PRISON
	---help---
	  dm-cache attempts to improve performance of a block device by
	  moving frequently used data to a smaller, higher performance
	  device. Different 'policy' plugins can be used to change the
	  algorithms used to select which blocks are promoted, demoted,
	  cleaned etc. It supports writeback and writethrough modes.

config DM_CACHE_SMQ
	tristate "Stochastic MQ Cache Policy (EXPERIMENTAL)"
	depends on DM_CACHE
	default y
	---help---
	  A cache policy that uses a multiqueue ordered by recent hits
	  to select which blocks should be promoted and demoted.
	  This is meant to be a general purpose policy. It prioritises
	  reads over writes. This SMQ policy (vs MQ) offers the promise
	  of less memory utilization, improved performance and increased
	  adaptability in the face of changing workloads.

config DM_CACHE_CLEANER
	tristate "Cleaner Cache Policy (EXPERIMENTAL)"
	depends on DM_CACHE
	default y
	---help---
	  A simple cache policy that writes back all data to the
	  origin. Used when decommissioning a dm-cache.

config DM_ERA
	tristate "Era target (EXPERIMENTAL)"
	depends on BLK_DEV_DM
	default n
	select DM_PERSISTENT_DATA
	select DM_BIO_PRISON
	---help---
	  dm-era tracks which parts of a block device are written to
	  over time. Useful for maintaining cache coherency when using
	  vendor snapshots.

config DM_MIRROR
	tristate "Mirror target"
	depends on BLK_DEV_DM
	---help---
	  Allow volume managers to mirror logical volumes, also
	  needed for live data migration tools such as 'pvmove'.

config DM_LOG_USERSPACE
	tristate "Mirror userspace logging"
	depends on DM_MIRROR && NET
	select CONNECTOR
	---help---
	  The userspace logging module provides a mechanism for
	  relaying the dm-dirty-log API to userspace. Log designs
	  which are more suited to userspace implementation (e.g.
	  shared storage logs) or experimental logs can be implemented
	  by leveraging this framework.

config DM_RAID
	tristate "RAID 1/4/5/6/10 target"
	depends on BLK_DEV_DM
	select MD_RAID0
	select MD_RAID1
	select MD_RAID10
	select MD_RAID456
	select BLK_DEV_MD
	---help---
	  A dm target that supports RAID1, RAID10, RAID4, RAID5 and RAID6
	  mappings.

	  A RAID-5 set of N drives with a capacity of C MB per drive provides
	  the capacity of C * (N - 1) MB, and protects against a failure
	  of a single drive. For a given sector (row) number, (N - 1) drives
	  contain data sectors, and one drive contains the parity protection.
	  For a RAID-4 set, the parity blocks are present on a single drive,
	  while a RAID-5 set distributes the parity across the drives in one
	  of the available parity distribution methods.

	  A RAID-6 set of N drives with a capacity of C MB per drive
	  provides the capacity of C * (N - 2) MB, and protects
	  against a failure of any two drives. For a given sector
	  (row) number, (N - 2) drives contain data sectors, and two
	  drives contain two independent redundancy syndromes. Like
	  RAID-5, RAID-6 distributes the syndromes across the drives
	  in one of the available parity distribution methods.

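# Illustrative example (assumes LVM2, the usual consumer of dm-raid; names
# are placeholders): a RAID-5 logical volume over four physical volumes could
# be created with
#
#   lvcreate --type raid5 --stripes 3 --size 12G --name r5 vg0
#
# Following the C * (N - 1) arithmetic above, this stores 12G of data plus
# roughly 4G of parity across the four member devices.
#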
config DM_ZERO
	tristate "Zero target"
	depends on BLK_DEV_DM
	---help---
	  A target that discards writes, and returns all zeroes for
	  reads. Useful in some recovery situations.

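# Illustrative example (dmsetup invocation; the name and length are
# placeholders): a 1 GiB zero device can be created with
#
#   dmsetup create zero0 --table "0 2097152 zero"
#
# Reads return zeroes and writes are silently dropped, which is handy as a
# stand-in for a missing device during recovery.
#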
config DM_MULTIPATH
	tristate "Multipath target"
	depends on BLK_DEV_DM
	# nasty syntax but means make DM_MULTIPATH independent
	# of SCSI_DH if the latter isn't defined but if
	# it is, DM_MULTIPATH must depend on it. We get a build
	# error if SCSI_DH=m and DM_MULTIPATH=y
	depends on !SCSI_DH || SCSI
	---help---
	  Allow volume managers to support multipath hardware.

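# Illustrative note (assumes the multipath-tools userspace package, which
# manages this target): paths are typically grouped and monitored by
# multipathd using /etc/multipath.conf, and the resulting maps can be
# inspected with
#
#   multipath -ll
#
# Hardware-specific path handling comes from the SCSI_DH handlers referred to
# in the dependency above.
#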
config DM_MULTIPATH_QL
	tristate "I/O Path Selector based on the number of in-flight I/Os"
	depends on DM_MULTIPATH
	---help---
	  This path selector is a dynamic load balancer which selects
	  the path with the least number of in-flight I/Os.

	  If unsure, say N.

config DM_MULTIPATH_ST
	tristate "I/O Path Selector based on the service time"
	depends on DM_MULTIPATH
	---help---
	  This path selector is a dynamic load balancer which selects
	  the path expected to complete the incoming I/O in the shortest
	  time.

	  If unsure, say N.

config DM_DELAY
	tristate "I/O delaying target"
	depends on BLK_DEV_DM
	---help---
	  A target that delays reads and/or writes and can send
	  them to different devices. Useful for testing.

	  If unsure, say N.

config DM_UEVENT
	bool "DM uevents"
	depends on BLK_DEV_DM
	---help---
	  Generate udev events for DM events.

config DM_FLAKEY
	tristate "Flakey target"
	depends on BLK_DEV_DM
	---help---
	  A target that intermittently fails I/O for debugging purposes.

config DM_VERITY
	tristate "Verity target support"
	depends on BLK_DEV_DM
	select CRYPTO
	select CRYPTO_HASH
	select DM_BUFIO
	---help---
	  This device-mapper target creates a read-only device that
	  transparently validates the data on one underlying device against
	  a pre-generated tree of cryptographic checksums stored on a second
	  device.

	  You'll need to activate the digests you're going to use in the
	  cryptoapi configuration.

	  To compile this code as a module, choose M here: the module will
	  be called dm-verity.

	  If unsure, say N.

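# Illustrative example (assumes the veritysetup tool from the cryptsetup
# project; device names are placeholders and the exact argument order should
# be checked against its man page): the hash tree is generated offline and
# the device is then activated read-only, roughly
#
#   veritysetup format /dev/sdb1 /dev/sdc1          # prints the root hash
#   veritysetup open /dev/sdb1 vroot /dev/sdc1 <root_hash>
#
# Any block whose checksum does not match the tree causes an I/O error.
#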
config DM_VERITY_FEC
	bool "Verity forward error correction support"
	depends on DM_VERITY
	select REED_SOLOMON
	select REED_SOLOMON_DEC8
	---help---
	  Add forward error correction support to dm-verity. This option
	  makes it possible to use pre-generated error correction data to
	  recover from corrupted blocks.

	  If unsure, say N.

config DM_SWITCH
	tristate "Switch target support (EXPERIMENTAL)"
	depends on BLK_DEV_DM
	---help---
	  This device-mapper target creates a device that supports an arbitrary
	  mapping of fixed-size regions of I/O across a fixed set of paths.
	  The path used for any specific region can be switched dynamically
	  by sending the target a message.

	  To compile this code as a module, choose M here: the module will
	  be called dm-switch.

	  If unsure, say N.

config DM_LOG_WRITES
	tristate "Log writes target support"
	depends on BLK_DEV_DM
	---help---
	  This device-mapper target takes two devices, one device to use
	  normally, and one to log all write operations done to the first
	  device. This is for use by file system developers wishing to verify
	  that their fs is writing a consistent file system at all times, by
	  allowing them to replay the log in a variety of ways and to check
	  the contents.

	  To compile this code as a module, choose M here: the module will
	  be called dm-log-writes.

	  If unsure, say N.

endif # MD