=========
Livepatch
=========

This document outlines basic information about kernel livepatching.

Table of Contents:

1. Motivation
2. Kprobes, Ftrace, Livepatching
3. Consistency model
4. Livepatch module
   4.1. New functions
   4.2. Metadata
   4.3. Livepatch module handling
5. Livepatch life-cycle
   5.1. Registration
   5.2. Enabling
   5.3. Disabling
   5.4. Unregistration
6. Sysfs
7. Limitations

1. Motivation
=============

There are many situations where users are reluctant to reboot a system. It
may be because their system is performing complex scientific computations
or is under heavy load during peak usage. In addition to keeping systems up
and running, users also want a stable and secure system. Livepatching gives
users both by allowing function calls to be redirected, thus fixing critical
functions without a system reboot.

2. Kprobes, Ftrace, Livepatching
================================

There are multiple mechanisms in the Linux kernel that are directly related
to redirection of code execution; namely: kernel probes, function tracing,
and livepatching:

  + The kernel probes are the most generic. The code can be redirected by
    putting a breakpoint instruction instead of almost any instruction.

  + The function tracer calls the code from a predefined location that is
    close to the function entry point. This location is generated by the
    compiler using the '-pg' gcc option.

  + Livepatching typically needs to redirect the code at the very beginning
    of the function entry, before the function parameters or the stack
    are in any way modified.

All three approaches need to modify the existing code at runtime. Therefore
they need to be aware of each other and not step over each other's toes.
Most of these problems are solved by using the dynamic ftrace framework as
a base. A kprobe is registered as an ftrace handler when the function entry
is probed, see CONFIG_KPROBES_ON_FTRACE. Also an alternative function from
a live patch is called with the help of a custom ftrace handler. But there
are some limitations, see below.

3. Consistency model
====================

Functions are there for a reason. They take some input parameters, get or
release locks, read, process, and even write some data in a defined way,
and have return values. In other words, each function has defined
semantics.

Many fixes do not change the semantics of the modified functions. For
example, they add a NULL pointer or a boundary check, fix a race by adding
a missing memory barrier, or add some locking around a critical section.
Most of these changes are self-contained and the function presents itself
the same way to the rest of the system. In this case, the functions might
be updated independently one by one.
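
For illustration, such a self-contained fix might look as follows. The
function and the structure are hypothetical; the point is only that the
fixed version keeps the same semantics as the original and can therefore
be patched on its own:

  /* Hypothetical original implementation with a missing check. */
  static int item_get_id(struct item *it)
  {
          return it->id;
  }

  /* Hypothetical fixed version of the same function: it adds the missing
   * NULL pointer check but otherwise behaves exactly like the original. */
  static int item_get_id(struct item *it)
  {
          if (!it)
                  return -EINVAL;

          return it->id;
  }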

But there are more complex fixes. For example, a patch might change
ordering of locking in multiple functions at the same time. Or a patch
might exchange the meaning of some temporary structures and update
all the relevant functions. In this case, the affected unit
(thread, whole kernel) needs to start using all new versions of
the functions at the same time. Also the switch must happen only
when it is safe to do so, e.g. when the affected locks are released
or no data are stored in the modified structures at the moment.

The theory about how to apply functions in a safe way is rather complex.
The aim is to define a so-called consistency model. It attempts to define
the conditions under which the new implementation could be used so that
the system stays consistent. The theory is not yet finished. See the
discussion at
http://thread.gmane.org/gmane.linux.kernel/1823033/focus=1828189

The current consistency model is very simple. It guarantees that either
the old or the new function is called. But various functions get redirected
one by one without any synchronization.

In other words, the current implementation _never_ modifies the behavior
in the middle of a call. This is because it does _not_ rewrite the entire
function in memory. Instead, the function gets redirected at the
very beginning. But this redirection is used immediately, even when
some other functions from the same patch have not been redirected yet.

See also the section "Limitations" below.

4. Livepatch module
===================

Livepatches are distributed using kernel modules, see
samples/livepatch/livepatch-sample.c.

The module includes a new implementation of the functions that we want
to replace. In addition, it defines some structures describing the
relation between the original and the new implementation. Then there
is code that makes the kernel start using the new code when the livepatch
module is loaded. Also there is code that cleans up before the
livepatch module is removed. All this is explained in more detail in
the next sections.

4.1. New functions
------------------

New versions of functions are typically just copied from the original
sources. A good practice is to add a prefix to their names so that they
can be distinguished from the original ones, e.g. in a backtrace. Also
they can be declared as static because they are not called directly
and do not need global visibility.
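
For example, the sample module in samples/livepatch/livepatch-sample.c
replaces cmdline_proc_show() with a prefixed, static copy along these
lines:

  #include <linux/seq_file.h>

  static int livepatch_cmdline_proc_show(struct seq_file *m, void *v)
  {
          seq_printf(m, "%s\n", "this has been live patched");
          return 0;
  }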

The patch contains only functions that are really modified. But they
might want to access functions or data from the original source file
that may only be locally accessible. This can be solved by a special
relocation section in the generated livepatch module, see
Documentation/livepatch/module-elf-format.txt for more details.

4.2. Metadata
-------------

The patch is described by several structures that split the information
into three levels; a short example is shown below the list:

  + struct klp_func is defined for each patched function. It describes
    the relation between the original and the new implementation of a
    particular function.

    The structure includes the name, as a string, of the original function.
    The function address is found via kallsyms at runtime.

    Then it includes the address of the new function. It is defined
    directly by assigning the function pointer. Note that the new
    function is typically defined in the same source file.

    As an optional parameter, the symbol position in the kallsyms database
    can be used to disambiguate functions of the same name. This is not
    the absolute position in the database, but rather the order in which
    the symbol has been found within a particular object (vmlinux or a
    kernel module). Note that kallsyms allows for searching symbols
    according to the object name.

  + struct klp_object defines an array of patched functions (struct
    klp_func) in the same object, where the object is either vmlinux
    (NULL) or a module name.

    The structure helps to group and handle the functions for each object
    together. Note that patched modules might be loaded later than
    the patch itself and the relevant functions might be patched
    only when they become available.

  + struct klp_patch defines an array of patched objects (struct
    klp_object).

    This structure handles all patched functions consistently and,
    eventually, synchronously. The whole patch is applied only when all
    patched symbols are found. The only exceptions are symbols from
    objects (kernel modules) that have not been loaded yet. Also, if a
    more complex consistency model is supported, then a selected unit
    (thread, kernel as a whole) will see the new code from the entire
    patch only when it is in a safe state.
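
A minimal set of these structures, in the spirit of
samples/livepatch/livepatch-sample.c, could look like this. It describes
the livepatch_cmdline_proc_show() replacement from the previous section:

  static struct klp_func funcs[] = {
          {
                  .old_name = "cmdline_proc_show",
                  .new_func = livepatch_cmdline_proc_show,
          }, { }
  };

  static struct klp_object objs[] = {
          {
                  /* NULL name means vmlinux */
                  .funcs = funcs,
          }, { }
  };

  static struct klp_patch patch = {
          .mod = THIS_MODULE,
          .objs = objs,
  };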

4.3. Livepatch module handling
------------------------------

The usual behavior is that the new functions will get used when
the livepatch module is loaded. For this, the module init() function
has to register the patch (struct klp_patch) and enable it. See the
section "Livepatch life-cycle" below for more details about these
two operations.
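
Using the structures from the previous example, the init function of the
sample module does roughly the following:

  static int livepatch_init(void)
  {
          int ret;

          ret = klp_register_patch(&patch);
          if (ret)
                  return ret;

          ret = klp_enable_patch(&patch);
          if (ret) {
                  WARN_ON(klp_unregister_patch(&patch));
                  return ret;
          }

          return 0;
  }
  module_init(livepatch_init);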

Module removal is only safe when there are no users of the underlying
functions. The immediate consistency model is not able to detect this;
therefore livepatch modules cannot be removed. See "Limitations" below.

5. Livepatch life-cycle
=======================

Livepatching defines four basic operations that form the life cycle of
each live patch: registration, enabling, disabling and unregistration.
There are several reasons why it is done this way.

First, the patch is applied only when all patched symbols for already
loaded objects are found. The error handling is much easier if this
check is done before particular functions get redirected.

Second, the immediate consistency model does not guarantee that nobody is
sleeping in the new code after the patch is reverted. This means that the
new code needs to stay around "forever". If the code is there, one could
apply it again. Therefore it makes sense to separate the operations that
might be done once and those that need to be repeated when the patch is
enabled (applied) again.

Third, it might take some time until the entire system is migrated
when a more complex consistency model is used. The patch revert might
block the livepatch module removal for too long. Therefore it is useful
to revert the patch using a separate operation that might be called
explicitly. But it does not make sense to remove all information
until the livepatch module is really removed.

5.1. Registration
-----------------

Each patch first has to be registered using klp_register_patch(). This
makes the patch known to the livepatch framework. It also does some
preliminary computation and checks.

In particular, the patch is added to the list of known patches. The
addresses of the patched functions are found according to their names.
The special relocations, mentioned in the section "New functions", are
applied. The relevant entries are created under
/sys/kernel/livepatch/<name>. The patch is rejected when any operation
fails.

5.2. Enabling
-------------

Registered patches might be enabled either by calling klp_enable_patch() or
by writing '1' to /sys/kernel/livepatch/<name>/enabled. The system will
start using the new implementation of the patched functions at this stage.

In particular, if an original function is patched for the first time, a
function-specific struct klp_ops is created and a universal ftrace handler
is registered.

Functions might be patched multiple times. The ftrace handler is registered
only once for a given function. Further patches just add an entry to the
list (see field `func_stack`) of the struct klp_ops. The last added
entry is chosen by the ftrace handler and becomes the active function
replacement.
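
A simplified sketch of what the shared ftrace handler does is shown below.
It is not the exact in-kernel code (the real handler has more locking and
error handling, and the helper names follow the livepatch core of this
kernel version), but the idea is that the newest entry on func_stack wins:

  static void notrace klp_ftrace_handler(unsigned long ip,
                                         unsigned long parent_ip,
                                         struct ftrace_ops *fops,
                                         struct pt_regs *regs)
  {
          struct klp_ops *ops = container_of(fops, struct klp_ops, fops);
          struct klp_func *func;

          rcu_read_lock();
          /* The most recently enabled patch is first on the stack. */
          func = list_first_or_null_rcu(&ops->func_stack, struct klp_func,
                                        stack_node);
          if (func)
                  /* Redirect execution to the new implementation. */
                  klp_arch_set_pc(regs, (unsigned long)func->new_func);
          rcu_read_unlock();
  }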

Note that the patches might be enabled in a different order than they were
registered.

5.3. Disabling
--------------

Enabled patches might get disabled either by calling klp_disable_patch() or
by writing '0' to /sys/kernel/livepatch/<name>/enabled. At this stage
either the code from the previously enabled patch or even the original
code gets used.

Here all the functions (struct klp_func) associated with the to-be-disabled
patch are removed from the corresponding struct klp_ops. The ftrace handler
is unregistered and the struct klp_ops is freed when the func_stack list
becomes empty.

Patches must be disabled in exactly the reverse order in which they were
enabled. This makes the problem, and the implementation, much easier.

5.4. Unregistration
-------------------

Disabled patches might be unregistered by calling klp_unregister_patch().
This can be done only when the patch is disabled and the code is no longer
used. It must be called before the livepatch module gets unloaded.

At this stage, all the relevant sysfs entries are removed and the patch
is removed from the list of known patches.
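
Unregistration typically happens from the module exit function. A minimal
sketch, assuming the patch has already been disabled:

  static void livepatch_exit(void)
  {
          WARN_ON(klp_unregister_patch(&patch));
  }
  module_exit(livepatch_exit);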

6. Sysfs
========

Information about the registered patches can be found under
/sys/kernel/livepatch. The patches can be enabled and disabled
by writing there.

See Documentation/ABI/testing/sysfs-kernel-livepatch for more details.

7. Limitations
==============

The current Livepatch implementation has several limitations:

  + The patch must not change the semantics of the patched functions.

    The current implementation guarantees only that either the old
    or the new function is called. The functions are patched one
    by one. It means that the patch must _not_ change the semantics
    of the function.

  + Data structures cannot be patched.

    There is no support for versioning data structures or migrating
    one structure into another. Also the simple consistency model does
    not allow switching multiple functions atomically.

    Once there is a more complex consistency model, it will be possible
    to use some workarounds. For example, it will be possible to use a
    hole for a new member because the data structure is aligned. Or it
    will be possible to use an existing member for something else.

    There are no plans to add more generic support for modified structures
    at the moment.

  + Only functions that can be traced can be patched.

    Livepatch is based on the dynamic ftrace. In particular, functions
    implementing ftrace or the livepatch ftrace handler cannot be
    patched. Otherwise, the code would end up in an infinite loop. A
    potential mistake is prevented by marking the problematic functions
    with "notrace".

  + Anything inlined into __schedule() cannot be patched.

    The switch_to macro is inlined into __schedule(). It switches the
    context between two processes in the middle of the macro. It does
    not save RIP in the x86_64 version (contrary to the 32-bit version).
    Instead, the currently used __schedule()/switch_to() handles both
    processes.

    Now, let's have two different tasks. One calls the original
    __schedule(), its registers are stored in a defined order and it
    goes to sleep in the switch_to macro, and some other task is restored
    using the original __schedule(). Then there is a second task which
    calls the patched __schedule(), it goes to sleep there, and the first
    task is picked by the patched __schedule(). Its RSP is restored and
    now the registers should be restored as well. But the order is
    different in the new patched __schedule(), so...

    There is work in progress to remove this limitation.

  + Livepatch modules cannot be removed.

    The current implementation just redirects the functions at the very
    beginning. It does not check if the functions are in use. In other
    words, it knows when the functions get called but it does not
    know when the functions return. Therefore it cannot decide when
    the livepatch module can be safely removed.

    This will most likely get solved once a more complex consistency model
    is supported. The idea is that a safe state for patching should also
    mean a safe state for removing the patch.

    Note that the patch itself might get disabled by writing zero
    to /sys/kernel/livepatch/<patch>/enabled. As a result, the new
    code will no longer get called. But it does not guarantee
    that nobody is sleeping anywhere in the new code.

  + Livepatch works reliably only when the dynamic ftrace is located at
    the very beginning of the function.

    The functions need to be redirected before the stack or the function
    parameters are modified in any way. For example, livepatch requires
    using the -mfentry gcc compiler option on x86_64.

    One exception is the PPC port. It uses relative addressing and TOC.
    Each function has to handle TOC and save LR before it can call
    the ftrace handler. This operation has to be reverted on return.
    Fortunately, the generic ftrace code has the same problem and all
    this is handled at the ftrace level.

  + Kretprobes using the ftrace framework conflict with the patched
    functions.

    Both kretprobes and livepatches use an ftrace handler that modifies
    the return address. The first user wins. Either the probe or the patch
    is rejected when the handler is already in use by the other.

  + Kprobes in the original function are ignored when the code is
    redirected to the new implementation.

    There is work in progress to add warnings about this situation.