This document provides "recipes", that is, litmus tests for commonly
occurring situations, as well as a few that illustrate subtly broken but
attractive nuisances.  Many of these recipes include example code from
v4.13 of the Linux kernel.

The first section covers simple special cases, the second section
takes off the training wheels to cover more involved examples,
and the third section provides a few rules of thumb.

Simple special cases
====================

This section presents two simple special cases, the first being where
there is only one CPU or where only one memory location is accessed, and
the second being use of that old concurrency workhorse, locking.


Single CPU or single memory location
------------------------------------
If there is only one CPU on the one hand or only one variable
on the other, the code will execute in order.  There are (as
usual) some things to be careful of:

1.  Some aspects of the C language are unordered.  For example,
    in the expression "f(x) + g(y)", the order in which f and g are
    called is not defined; the object code is allowed to use either
    order or even to interleave the computations.

2.  Compilers are permitted to use the "as-if" rule.  That is, a
    compiler can emit whatever code it likes for normal accesses,
    as long as the results of a single-threaded execution appear
    just as if the compiler had followed all the relevant rules.
    To see this, compile with a high level of optimization and run
    the debugger on the resulting binary.

3.  If there is only one variable but multiple CPUs, that variable
    must be properly aligned and all accesses to that variable must
    be full sized.  Variables that straddle cachelines or pages void
    your full-ordering warranty, as do undersized accesses that load
    from or store to only part of the variable.
4.  If there are multiple CPUs, accesses to shared variables should
    use READ_ONCE() and WRITE_ONCE() or stronger to prevent load/store
    tearing, load/store fusing, and invented loads and stores, as
    illustrated by the sketch following this list.
    There are exceptions to this rule, including:

    i.   When there is no possibility of a given shared variable
         being updated by some other CPU, for example, while
         holding the update-side lock, reads from that variable
         need not use READ_ONCE().

    ii.  When there is no possibility of a given shared variable
         being either read or updated by other CPUs, for example,
         when running during early boot, reads from that variable
         need not use READ_ONCE() and writes to that variable
         need not use WRITE_ONCE().
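
As a minimal sketch of this last rule, consider one CPU polling a shared
flag that another CPU sets.  This example is not taken from the kernel;
the function names and the flag variable are made up for illustration:

	/* Hypothetical example.  READ_ONCE() keeps the compiler from
	 * fusing the polling loads or hoisting them out of the loop,
	 * and WRITE_ONCE() keeps it from tearing or inventing stores. */
	int flag;	/* Shared, properly aligned, accessed full sized. */

	void wait_for_flag(void)	/* Hypothetical function. */
	{
		while (!READ_ONCE(flag))
			cpu_relax();
	}

	void set_flag(void)		/* Hypothetical function. */
	{
		WRITE_ONCE(flag, 1);
	}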

Locking
-------

Locking is well-known and straightforward, at least if you don't think
about it too hard.  And the basic rule is indeed quite simple: Any CPU that
has acquired a given lock sees any changes previously seen or made by any
CPU before it released that same lock.  Note that this statement is a bit
stronger than "Any CPU holding a given lock sees all changes made by any
CPU during the time that CPU was holding this same lock".  For example,
consider the following pair of code fragments:
	/* See MP+polocks.litmus. */
	void CPU0(void)
	{
		WRITE_ONCE(x, 1);
		spin_lock(&mylock);
		WRITE_ONCE(y, 1);
		spin_unlock(&mylock);
	}

	void CPU1(void)
	{
		spin_lock(&mylock);
		r0 = READ_ONCE(y);
		spin_unlock(&mylock);
		r1 = READ_ONCE(x);
	}
The basic rule guarantees that if CPU0() acquires mylock before CPU1(),
then both r0 and r1 must be set to the value 1.  This also has the
consequence that if the final value of r0 is equal to 1, then the final
value of r1 must also be equal to 1.  In contrast, the weaker rule would
say nothing about the final value of r1.

The converse to the basic rule also holds, as illustrated by the
following litmus test:
	/* See MP+porevlocks.litmus. */
	void CPU0(void)
	{
		r0 = READ_ONCE(y);
		spin_lock(&mylock);
		r1 = READ_ONCE(x);
		spin_unlock(&mylock);
	}

	void CPU1(void)
	{
		spin_lock(&mylock);
		WRITE_ONCE(x, 1);
		spin_unlock(&mylock);
		WRITE_ONCE(y, 1);
	}
This converse to the basic rule guarantees that if CPU0() acquires
mylock before CPU1(), then both r0 and r1 must be set to the value 0.
This also has the consequence that if the final value of r1 is equal
to 0, then the final value of r0 must also be equal to 0.  In contrast,
the weaker rule would say nothing about the final value of r0.

These examples show only a single pair of CPUs, but the effects of the
locking basic rule extend across multiple acquisitions of a given lock
across multiple CPUs.
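
For instance, in the following sketch (not one of the shipped litmus
tests; the third CPU and the variable z are made up for illustration),
final values of 1 for both r0 and r1 imply that the lock was acquired
in the order CPU0(), CPU1(), CPU2(), in which case the basic rule,
applied once per acquisition, guarantees a final value of 1 for r2:

	/* Hypothetical three-CPU extension of MP+polocks.litmus. */
	void CPU0(void)
	{
		WRITE_ONCE(x, 1);
		spin_lock(&mylock);
		WRITE_ONCE(y, 1);
		spin_unlock(&mylock);
	}

	void CPU1(void)
	{
		spin_lock(&mylock);
		r0 = READ_ONCE(y);	/* r0 == 1 implies CPU0() released first. */
		WRITE_ONCE(z, 1);
		spin_unlock(&mylock);
	}

	void CPU2(void)
	{
		spin_lock(&mylock);
		r1 = READ_ONCE(z);	/* r1 == 1 implies CPU1() released first. */
		spin_unlock(&mylock);
		r2 = READ_ONCE(x);	/* ... in which case r2 must be 1. */
	}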
However, it is not necessarily the case that accesses ordered by
locking will be seen as ordered by CPUs not holding that lock.
Consider this example:
	/* See Z6.0+pooncelock+pooncelock+pombonce.litmus. */
	void CPU0(void)
	{
		spin_lock(&mylock);
		WRITE_ONCE(x, 1);
		WRITE_ONCE(y, 1);
		spin_unlock(&mylock);
	}

	void CPU1(void)
	{
		spin_lock(&mylock);
		r0 = READ_ONCE(y);
		WRITE_ONCE(z, 1);
		spin_unlock(&mylock);
	}

	void CPU2(void)
	{
		WRITE_ONCE(z, 2);
		smp_mb();
		r1 = READ_ONCE(x);
	}
Counter-intuitive though it might be, it is quite possible to have
the final value of r0 be 1, the final value of z be 2, and the final
value of r1 be 0.  The reason for this surprising outcome is that
CPU2() never acquired the lock, and thus did not benefit from the
lock's ordering properties.

Ordering can be extended to CPUs not holding the lock by careful use
of smp_mb__after_spinlock():
	/* See Z6.0+pooncelock+poonceLock+pombonce.litmus. */
	void CPU0(void)
	{
		spin_lock(&mylock);
		WRITE_ONCE(x, 1);
		WRITE_ONCE(y, 1);
		spin_unlock(&mylock);
	}

	void CPU1(void)
	{
		spin_lock(&mylock);
		smp_mb__after_spinlock();
		r0 = READ_ONCE(y);
		WRITE_ONCE(z, 1);
		spin_unlock(&mylock);
	}

	void CPU2(void)
	{
		WRITE_ONCE(z, 2);
		smp_mb();
		r1 = READ_ONCE(x);
	}
This addition of smp_mb__after_spinlock() strengthens the lock acquisition
sufficiently to rule out the counter-intuitive outcome.


Taking off the training wheels
==============================

This section looks at more complex examples, including message passing,
load buffering, release-acquire chains, and store buffering.
Many classes of litmus tests have abbreviated names, which may be found
here: https://www.cl.cam.ac.uk/~pes20/ppc-supplemental/test6.pdf

Message passing (MP)
--------------------

The MP pattern has one CPU execute a pair of stores to a pair of variables
and another CPU execute a pair of loads from this same pair of variables,
but in the opposite order.  The goal is to avoid the counter-intuitive
outcome in which the first load sees the value written by the second store
but the second load does not see the value written by the first store.
In the absence of any ordering, this goal may not be met, as can be seen
in the MP+poonceonces.litmus litmus test.  This section therefore looks at
a number of ways of meeting this goal.


Release and acquire
~~~~~~~~~~~~~~~~~~~

Use of smp_store_release() and smp_load_acquire() is one way to force
the desired MP ordering.  The general approach is shown below:
	/* See MP+pooncerelease+poacquireonce.litmus. */
	void CPU0(void)
	{
		WRITE_ONCE(x, 1);
		smp_store_release(&y, 1);
	}

	void CPU1(void)
	{
		r0 = smp_load_acquire(&y);
		r1 = READ_ONCE(x);
	}
The smp_store_release() macro orders any prior accesses against the
store, while the smp_load_acquire() macro orders the load against any
subsequent accesses.  Therefore, if the final value of r0 is the value 1,
the final value of r1 must also be the value 1.

The init_stack_slab() function in lib/stackdepot.c uses release-acquire
in this way to safely initialize a slab of the stack.  Working out
the mutual-exclusion design is left as an exercise for the reader.

Assign and dereference
~~~~~~~~~~~~~~~~~~~~~~

Use of rcu_assign_pointer() and rcu_dereference() is quite similar to the
use of smp_store_release() and smp_load_acquire(), except that both
rcu_assign_pointer() and rcu_dereference() operate on RCU-protected
pointers.  The general approach is shown below:
	/* See MP+onceassign+derefonce.litmus. */
	int z;
	int *y = &z;
	int x;

	void CPU0(void)
	{
		WRITE_ONCE(x, 1);
		rcu_assign_pointer(y, &x);
	}

	void CPU1(void)
	{
		rcu_read_lock();
		r0 = rcu_dereference(y);
		r1 = READ_ONCE(*r0);
		rcu_read_unlock();
	}
In this example, if the final value of r0 is &x then the final value of
r1 must be 1.

The rcu_assign_pointer() macro has the same ordering properties as does
smp_store_release(), but the rcu_dereference() macro orders the load only
against later accesses that depend on the value loaded.  A dependency
is present if the value loaded determines the address of a later access
(address dependency, as shown above), the value written by a later store
(data dependency), or whether or not a later store is executed in the
first place (control dependency).  Note that the term "data dependency"
is sometimes casually used to cover both address and data dependencies.
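
As a rough sketch (not taken from the kernel; gp, a, b, and c are made
up for illustration), here is one example of each kind of dependency
flowing from a loaded value:

	/* Hypothetical RCU reader, for illustration only. */
	void CPU1(void)
	{
		struct foo *p;

		rcu_read_lock();
		p = rcu_dereference(gp);	/* gp: hypothetical RCU-protected pointer. */
		r0 = READ_ONCE(p->a);		/* Address dependency: p determines the address. */
		WRITE_ONCE(b, r0);		/* Data dependency: r0 determines the value stored. */
		if (r0)				/* Control dependency: r0 determines whether */
			WRITE_ONCE(c, 1);	/*   this store executes at all. */
		rcu_read_unlock();
	}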
In lib/prime_numbers.c, the expand_to_next_prime() function invokes
rcu_assign_pointer(), and the next_prime_number() function invokes
rcu_dereference().  This combination mediates access to a bit vector
that is expanded as additional primes are needed.

Write and read memory barriers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is usually better to use smp_store_release() instead of smp_wmb()
and to use smp_load_acquire() instead of smp_rmb().  However, the older
smp_wmb() and smp_rmb() APIs are still heavily used, so it is important
to understand their use cases.  The general approach is shown below:
	/* See MP+fencewmbonceonce+fencermbonceonce.litmus. */
	void CPU0(void)
	{
		WRITE_ONCE(x, 1);
		smp_wmb();
		WRITE_ONCE(y, 1);
	}

	void CPU1(void)
	{
		r0 = READ_ONCE(y);
		smp_rmb();
		r1 = READ_ONCE(x);
	}
The smp_wmb() macro orders prior stores against later stores, and the
smp_rmb() macro orders prior loads against later loads.  Therefore, if
the final value of r0 is 1, the final value of r1 must also be 1.
The xlog_state_switch_iclogs() function in fs/xfs/xfs_log.c contains
the following write-side code fragment:

	log->l_curr_block -= log->l_logBBsize;
	ASSERT(log->l_curr_block >= 0);
	smp_wmb();
	log->l_curr_cycle++;

And the xlog_valid_lsn() function in fs/xfs/xfs_log_priv.h contains
the corresponding read-side code fragment:

	cur_cycle = READ_ONCE(log->l_curr_cycle);
	smp_rmb();
	cur_block = READ_ONCE(log->l_curr_block);
Alternatively, consider the following comment in function
perf_output_put_handle() in kernel/events/ring_buffer.c:

 *   kernel                             user
 *
 *   if (LOAD ->data_tail) {            LOAD ->data_head
 *                      (A)             smp_rmb()       (C)
 *      STORE $data                     LOAD $data
 *      smp_wmb()       (B)             smp_mb()        (D)
 *      STORE ->data_head               STORE ->data_tail
 *   }

The B/C pairing is an example of the MP pattern using smp_wmb() on the
write side and smp_rmb() on the read side.

Of course, given that smp_mb() is strictly stronger than either smp_wmb()
or smp_rmb(), any code fragment that would work with smp_rmb() and
smp_wmb() would also work with smp_mb() replacing either or both of the
weaker barriers.
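
For example, the write side of the earlier MP fragment remains correct
with smp_mb() standing in for smp_wmb() (a sketch, not one of the
shipped litmus tests):

	/* Sketch only: smp_mb() substituted for smp_wmb() on the write
	 * side.  The read side could keep its smp_rmb() or could likewise
	 * substitute smp_mb(). */
	void CPU0(void)
	{
		WRITE_ONCE(x, 1);
		smp_mb();	/* Strictly stronger than the smp_wmb() it replaces. */
		WRITE_ONCE(y, 1);
	}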

Load buffering (LB)
-------------------

The LB pattern has one CPU load from one variable and then store to a
second, while another CPU loads from the second variable and then stores
to the first.  The goal is to avoid the counter-intuitive situation where
each load reads the value written by the other CPU's store.  In the
absence of any ordering it is quite possible that this may happen, as
can be seen in the LB+poonceonces.litmus litmus test.

One way of avoiding the counter-intuitive outcome is through the use of a
control dependency paired with a full memory barrier:
	/* See LB+fencembonceonce+ctrlonceonce.litmus. */
	void CPU0(void)
	{
		r0 = READ_ONCE(x);
		if (r0)
			WRITE_ONCE(y, 1);
	}

	void CPU1(void)
	{
		r1 = READ_ONCE(y);
		smp_mb();
		WRITE_ONCE(x, 1);
	}
This pairing of a control dependency in CPU0() with a full memory
barrier in CPU1() prevents r0 and r1 from both ending up equal to 1.

The A/D pairing from the ring-buffer use case shown earlier also
illustrates LB.  Here is a repeat of the comment in
perf_output_put_handle() in kernel/events/ring_buffer.c, showing a
control dependency on the kernel side and a full memory barrier on
the user side:
 *   kernel                             user
 *
 *   if (LOAD ->data_tail) {            LOAD ->data_head
 *                      (A)             smp_rmb()       (C)
 *      STORE $data                     LOAD $data
 *      smp_wmb()       (B)             smp_mb()        (D)
 *      STORE ->data_head               STORE ->data_tail
 *   }
 *
 * Where A pairs with D, and B pairs with C.

The kernel's control dependency between the load from ->data_tail
and the store to data combined with the user's full memory barrier
between the load from data and the store to ->data_tail prevents
the counter-intuitive outcome where the kernel overwrites the data
before the user gets done loading it.

Release-acquire chains
----------------------

Release-acquire chains are a low-overhead, flexible, and easy-to-use
method of maintaining order.  However, they do have some limitations that
need to be fully understood.  Here is an example that maintains order:
	/* See ISA2+pooncerelease+poacquirerelease+poacquireonce.litmus. */
	void CPU0(void)
	{
		WRITE_ONCE(x, 1);
		smp_store_release(&y, 1);
	}

	void CPU1(void)
	{
		r0 = smp_load_acquire(&y);
		smp_store_release(&z, 1);
	}

	void CPU2(void)
	{
		r1 = smp_load_acquire(&z);
		r2 = READ_ONCE(x);
	}
In this case, if r0 and r1 both have final values of 1, then r2 must
also have a final value of 1.

The ordering in this example is stronger than it needs to be.  For
example, ordering would still be preserved if CPU1()'s smp_load_acquire()
invocation was replaced with READ_ONCE().
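
In other words, the following variant of CPU1() (a sketch, not one of
the shipped litmus tests) still keeps the chain intact, because the
release store orders the prior load of y before the store to z:

	/* Sketch only: CPU1() with its acquire load weakened to READ_ONCE(). */
	void CPU1(void)
	{
		r0 = READ_ONCE(y);
		smp_store_release(&z, 1);	/* Still orders the prior load. */
	}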

It is tempting to assume that CPU0()'s store to x is globally ordered
before CPU1()'s store to z, but this is not the case:

	/* See Z6.0+pooncerelease+poacquirerelease+fencembonceonce.litmus. */
	void CPU0(void)
	{
		WRITE_ONCE(x, 1);
		smp_store_release(&y, 1);
	}

	void CPU1(void)
	{
		r0 = smp_load_acquire(&y);
		smp_store_release(&z, 1);
	}

	void CPU2(void)
	{
		WRITE_ONCE(z, 2);
		smp_mb();
		r1 = READ_ONCE(x);
	}
One might hope that if the final value of r0 is 1 and the final value
of z is 2, then the final value of r1 must also be 1, but it really is
possible for r1 to have the final value of 0.  The reason, of course,
is that in this version, CPU2() is not part of the release-acquire chain.
This situation is accounted for in the rules of thumb below.

Despite this limitation, release-acquire chains are low-overhead as
well as simple and powerful, at least as memory-ordering mechanisms go.

Store buffering
---------------

Store buffering can be thought of as upside-down load buffering, so
that one CPU first stores to one variable and then loads from a second,
while another CPU stores to the second variable and then loads from the
first.  Preserving order requires nothing less than full barriers:

	/* See SB+fencembonceonces.litmus. */
	void CPU0(void)
	{
		WRITE_ONCE(x, 1);
		smp_mb();
		r0 = READ_ONCE(y);
	}

	void CPU1(void)
	{
		WRITE_ONCE(y, 1);
		smp_mb();
		r1 = READ_ONCE(x);
	}

Omitting either smp_mb() will allow both r0 and r1 to have final
values of 0, but providing both full barriers as shown above prevents
this counter-intuitive outcome.
This pattern most famously appears as part of Dekker's locking
algorithm, but it has a much more practical use within the Linux kernel
of ordering wakeups.  The following comment taken from waitqueue_active()
in include/linux/wait.h shows the canonical pattern:

 *      CPU0 - waker                    CPU1 - waiter
 *
 *                                      for (;;) {
 *      @cond = true;                     prepare_to_wait(&wq_head, &wait, state);
 *      smp_mb();                         // smp_mb() from set_current_state()
 *      if (waitqueue_active(wq_head))    if (@cond)
 *        wake_up(wq_head);                 break;
 *                                        schedule();
 *                                      }
 *                                      finish_wait(&wq_head, &wait);

On CPU0, the store is to @cond and the load is in waitqueue_active().
On CPU1, prepare_to_wait() contains both a store to wq_head and a call
to set_current_state(), which contains an smp_mb() barrier; the load is
"if (@cond)".  The full barriers prevent the undesirable outcome where
CPU1 puts the waiting task to sleep and CPU0 fails to wake it up.

Note that use of locking can greatly simplify this pattern.

Rules of thumb
==============

There might seem to be no pattern governing what ordering primitives are
needed in which situations, but this is not the case.  There is a pattern
based on the relation between the accesses linking successive CPUs in a
given litmus test.  There are three types of linkage:

1.  Write-to-read, where the next CPU reads the value that the
    previous CPU wrote.  The LB litmus-test patterns contain only
    this type of relation.  In formal memory-modeling texts, this
    relation is called "reads-from" and is usually abbreviated "rf".

2.  Read-to-write, where the next CPU overwrites the value that the
    previous CPU read.  The SB litmus test contains only this type
    of relation.  In formal memory-modeling texts, this relation is
    often called "from-reads" and is sometimes abbreviated "fr".

3.  Write-to-write, where the next CPU overwrites the value written
    by the previous CPU.  The Z6.0 litmus test pattern contains a
    write-to-write relation between the last access of CPU1() and
    the first access of CPU2().  In formal memory-modeling texts,
    this relation is often called "coherence order" and is sometimes
    abbreviated "co".  In the C++ standard, it is instead called
    "modification order" and often abbreviated "mo".
The strength of memory ordering required for a given litmus test to
avoid a counter-intuitive outcome depends on the types of relations
linking the memory accesses for the outcome in question:

o   If all links are write-to-read links, then the weakest
    possible ordering within each CPU suffices.  For example, in
    the LB litmus test, a control dependency was enough to do the
    job.

o   If all but one of the links are write-to-read links, then a
    release-acquire chain suffices.  Both the MP and the ISA2
    litmus tests illustrate this case.

o   If more than one of the links are something other than
    write-to-read links, then a full memory barrier is required
    between each successive pair of non-write-to-read links.  This
    case is illustrated by the Z6.0 litmus tests, both in the
    locking and in the release-acquire sections.
However, if you find yourself having to stretch these rules of thumb
to fit your situation, you should consider creating a litmus test and
running it on the model.