tlb.txt

When the kernel unmaps or modifies the attributes of a range of
memory, it has two choices:

 1. Flush the entire TLB with a two-instruction sequence. This is
    a quick operation, but it causes collateral damage: TLB entries
    from areas other than the one we are trying to flush will be
    destroyed and must be refilled later, at some cost.
 2. Use the invlpg instruction to invalidate a single page at a
    time. This could potentially cost many more instructions, but
    it is a much more precise operation, causing no collateral
    damage to other TLB entries.
Which method to use depends on a few things:

 1. The size of the flush being performed. A flush of the entire
    address space is obviously better performed by flushing the
    entire TLB than doing 2^48/PAGE_SIZE individual flushes.
 2. The contents of the TLB. If the TLB is empty, then there will
    be no collateral damage caused by doing the global flush, and
    all of the individual flushes will have been wasted work.
 3. The size of the TLB. The larger the TLB, the more collateral
    damage we do with a full flush. So, the larger the TLB, the
    more attractive an individual flush looks. Data and
    instructions have separate TLBs, as do different page sizes.
 4. The microarchitecture. The TLB has become a multi-level
    cache on modern CPUs, and the global flushes have become more
    expensive relative to single-page flushes.
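As a rough illustration of the trade-off, the in-kernel decision (made in
flush_tlb_mm_range()) boils down to comparing the number of pages in the
range against a ceiling. The shell sketch below models only that comparison;
the ceiling value here is illustrative, not necessarily the kernel's default:

```shell
# Sketch of the decision only; the real logic lives in
# flush_tlb_mm_range().  The ceiling value is an assumption.
ceiling=33   # pages; cf. the tunable described below

decide_flush() {
	npages=$1
	if [ "$npages" -gt "$ceiling" ]; then
		echo "full TLB flush"
	else
		echo "invlpg x $npages"
	fi
}

decide_flush 4      # -> invlpg x 4
decide_flush 65536  # -> full TLB flush
```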
There is obviously no way the kernel can know all these things,
especially the contents of the TLB during a given flush. The
sizes of the flushes will vary greatly depending on the workload
as well. There is essentially no "right" point to choose.
You may be doing too many individual invalidations if you see the
invlpg instruction (or instructions _near_ it) show up high in
profiles. If you believe that individual invalidations are being
done too often, you can lower the tunable:

	/sys/kernel/debug/x86/tlb_single_page_flush_ceiling

This will cause us to do the global flush for more cases.
Lowering it to 0 will disable the use of the individual flushes.
Setting it to 1 is a very conservative setting and it should
never need to be 0 under normal circumstances.
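For example, a session along these lines (it needs root, and the file only
exists when debugfs is mounted; the snippet is guarded so it does nothing
on machines where the knob is absent):

```shell
# The path is the real tunable named above; the guards are just
# so this is harmless without debugfs or root privileges.
f=/sys/kernel/debug/x86/tlb_single_page_flush_ceiling
if [ -r "$f" ]; then
	cat "$f"          # current ceiling, in pages
fi
if [ -w "$f" ]; then
	echo 1 > "$f"     # very conservative: nearly always full-flush
fi
```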
Despite the fact that a single individual flush on x86 is
guaranteed to flush a full 2MB [1], hugetlbfs always uses the
full flush. THP is treated exactly the same as normal memory.

You might see invlpg inside of flush_tlb_mm_range() show up in
profiles, or you can use the trace_tlb_flush() tracepoints to
determine how long the flush operations are taking.
Essentially, you are balancing the cycles you spend doing invlpg
with the cycles that you spend refilling the TLB later.
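A back-of-the-envelope version of that balance, with entirely made-up
cycle counts (real costs vary by microarchitecture and workload):

```shell
# All numbers below are assumptions for illustration only.
invlpg_cycles=200      # assumed cost of one invlpg
refill_cycles=100      # assumed cost of refilling one TLB entry
entries_lost=64        # entries a full flush would needlessly evict
npages=16              # pages in the range being flushed

invlpg_path=$(( npages * invlpg_cycles ))
full_path=$(( entries_lost * refill_cycles ))
echo "invlpg path:     $invlpg_path cycles"
echo "full-flush path: $full_path cycles"
```

Under these made-up numbers the per-page path wins; grow npages past 32
and it no longer does. That cross-over point is what the ceiling tunable
above tries to approximate.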
	perf stat -e \
		cpu/event=0x8,umask=0x84,name=dtlb_load_misses_walk_duration/, \
		cpu/event=0x8,umask=0x82,name=dtlb_load_misses_walk_completed/, \
		cpu/event=0x49,umask=0x4,name=dtlb_store_misses_walk_duration/, \
		cpu/event=0x49,umask=0x2,name=dtlb_store_misses_walk_completed/, \
		cpu/event=0x85,umask=0x4,name=itlb_misses_walk_duration/, \
		cpu/event=0x85,umask=0x2,name=itlb_misses_walk_completed/
That works on an IvyBridge-era CPU (i5-3320M). Different CPUs
may have differently-named counters, but they should at least
be there in some form. You can use pmu-tools 'ocperf list'
(https://github.com/andikleen/pmu-tools) to find the right
counters for a given CPU.
1. A footnote in Intel's SDM "4.10.4.2 Recommended Invalidation"
   says: "One execution of INVLPG is sufficient even for a page
   with size greater than 4 KBytes."