
=======================
Kernel Probes (Kprobes)
=======================

:Author: Jim Keniston <jkenisto@us.ibm.com>
:Author: Prasanna S Panchamukhi <prasanna.panchamukhi@gmail.com>
:Author: Masami Hiramatsu <mhiramat@redhat.com>

.. CONTENTS

  1. Concepts: Kprobes, and Return Probes
  2. Architectures Supported
  3. Configuring Kprobes
  4. API Reference
  5. Kprobes Features and Limitations
  6. Probe Overhead
  7. TODO
  8. Kprobes Example
  9. Kretprobes Example
  10. Deprecated Features
  Appendix A: The kprobes debugfs interface
  Appendix B: The kprobes sysctl interface
Concepts: Kprobes and Return Probes
=========================================

Kprobes enables you to dynamically break into any kernel routine and
collect debugging and performance information non-disruptively. You
can trap at almost any kernel code address [1]_, specifying a handler
routine to be invoked when the breakpoint is hit.

.. [1] some parts of the kernel code can not be trapped, see
       :ref:`kprobes_blacklist`

There are currently two types of probes: kprobes, and kretprobes
(also called return probes). A kprobe can be inserted on virtually
any instruction in the kernel. A return probe fires when a specified
function returns.

In the typical case, Kprobes-based instrumentation is packaged as
a kernel module. The module's init function installs ("registers")
one or more probes, and the exit function unregisters them. A
registration function such as register_kprobe() specifies where
the probe is to be inserted and what handler is to be called when
the probe is hit.

There are also ``register_/unregister_*probes()`` functions for batch
registration/unregistration of a group of ``*probes``. These functions
can speed up the unregistration process when you have to unregister
a lot of probes at once.
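
For orientation, here is a minimal sketch of such a module. It is an
illustration only, not the reference example: the probed symbol
("vfs_read"), the handler body, and the message text are arbitrary
assumptions. See samples/kprobes/kprobe_example.c for the maintained
example::

    /* Hedged sketch: a minimal Kprobes module. */
    #include <linux/module.h>
    #include <linux/kernel.h>
    #include <linux/kprobes.h>

    static int my_pre_handler(struct kprobe *p, struct pt_regs *regs)
    {
            pr_info("probe at %p hit\n", p->addr);
            return 0;       /* 0: let Kprobes single-step the original insn */
    }

    static struct kprobe my_kprobe = {
            .symbol_name = "vfs_read",      /* assumed probeable symbol */
            .pre_handler = my_pre_handler,
    };

    static int __init probe_init(void)
    {
            int ret = register_kprobe(&my_kprobe);

            if (ret < 0)
                    pr_err("register_kprobe failed: %d\n", ret);
            return ret;
    }

    static void __exit probe_exit(void)
    {
            unregister_kprobe(&my_kprobe);
    }

    module_init(probe_init);
    module_exit(probe_exit);
    MODULE_LICENSE("GPL");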

The next four subsections explain how the different types of
probes work and how jump optimization works. They explain certain
things that you'll need to know in order to make the best use of
Kprobes -- e.g., the difference between a pre_handler and
a post_handler, and how to use the maxactive and nmissed fields of
a kretprobe. But if you're in a hurry to start using Kprobes, you
can skip ahead to :ref:`kprobes_archs_supported`.
How Does a Kprobe Work?
-----------------------

When a kprobe is registered, Kprobes makes a copy of the probed
instruction and replaces the first byte(s) of the probed instruction
with a breakpoint instruction (e.g., int3 on i386 and x86_64).

When a CPU hits the breakpoint instruction, a trap occurs, the CPU's
registers are saved, and control passes to Kprobes via the
notifier_call_chain mechanism. Kprobes executes the "pre_handler"
associated with the kprobe, passing the handler the addresses of the
kprobe struct and the saved registers.

Next, Kprobes single-steps its copy of the probed instruction.
(It would be simpler to single-step the actual instruction in place,
but then Kprobes would have to temporarily remove the breakpoint
instruction. This would open a small time window when another CPU
could sail right past the probepoint.)

After the instruction is single-stepped, Kprobes executes the
"post_handler," if any, that is associated with the kprobe.
Execution then continues with the instruction following the probepoint.
Changing Execution Path
-----------------------

Since kprobes can probe running kernel code, it can change the
register set, including the instruction pointer. This operation
requires maximum care, such as keeping the stack frame and recovering
the execution path. Since it operates on a running kernel and needs
deep knowledge of computer architecture and concurrent computing, you
can easily shoot yourself in the foot.

If you change the instruction pointer (and set up other related
registers) in your pre_handler, you must return !0 so that kprobes
stops single-stepping and just returns to the given address.
This also means the post_handler should not be called anymore.

Note that this operation may be harder on some architectures which
use a TOC (Table of Contents) for function calls, since you have to
set up a new TOC for your function in your module, and recover the
old one after returning from it.
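
The sketch below illustrates the rule above on x86, where the saved
instruction pointer is regs->ip. All names here are hypothetical, and
a real replacement routine must match the probed function's calling
convention; this is a conceptual sketch, not a recipe::

    /* Sketch (x86): divert execution from the probed address. */
    #include <linux/kprobes.h>
    #include <linux/ptrace.h>

    static void my_replacement(void)
    {
            /* stands in for code compatible with the probed function */
    }

    static int my_redirect_pre(struct kprobe *p, struct pt_regs *regs)
    {
            /* Point the saved instruction pointer at the substitute code. */
            regs->ip = (unsigned long)my_replacement;

            /*
             * Return !0: Kprobes skips single-stepping the copied
             * instruction and resumes at the address set above.  The
             * post_handler, if any, is not called for this hit.
             */
            return 1;
    }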

Return Probes
-------------

How Does a Return Probe Work?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When you call register_kretprobe(), Kprobes establishes a kprobe at
the entry to the function. When the probed function is called and this
probe is hit, Kprobes saves a copy of the return address, and replaces
the return address with the address of a "trampoline." The trampoline
is an arbitrary piece of code -- typically just a nop instruction.
At boot time, Kprobes registers a kprobe at the trampoline.

When the probed function executes its return instruction, control
passes to the trampoline and that probe is hit. Kprobes' trampoline
handler calls the user-specified return handler associated with the
kretprobe, then sets the saved instruction pointer to the saved return
address, and that's where execution resumes upon return from the trap.

While the probed function is executing, its return address is
stored in an object of type kretprobe_instance. Before calling
register_kretprobe(), the user sets the maxactive field of the
kretprobe struct to specify how many instances of the specified
function can be probed simultaneously. register_kretprobe()
pre-allocates the indicated number of kretprobe_instance objects.

For example, if the function is non-recursive and is called with a
spinlock held, maxactive = 1 should be enough. If the function is
non-recursive and can never relinquish the CPU (e.g., via a semaphore
or preemption), NR_CPUS should be enough. If maxactive <= 0, it is
set to a default value. If CONFIG_PREEMPT is enabled, the default
is max(10, 2*NR_CPUS). Otherwise, the default is NR_CPUS.

It's not a disaster if you set maxactive too low; you'll just miss
some probes. In the kretprobe struct, the nmissed field is set to
zero when the return probe is registered, and is incremented every
time the probed function is entered but there is no kretprobe_instance
object available for establishing the return probe.
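
As a hedged sketch (the sizing value and the message are illustrative,
and the handler fields are omitted), this is how maxactive is typically
set before registration and how nmissed can be inspected when the probe
is removed::

    /* Sketch: sizing maxactive and checking nmissed. */
    #include <linux/module.h>
    #include <linux/kprobes.h>

    static struct kretprobe my_kretprobe;   /* handler fields omitted here */

    static int __init my_init(void)
    {
            /* Assumption: the probed function may sleep, so allow several
             * concurrent activations; a value <= 0 selects the default. */
            my_kretprobe.maxactive = 20;
            return register_kretprobe(&my_kretprobe);
    }

    static void __exit my_exit(void)
    {
            unregister_kretprobe(&my_kretprobe);
            /* Each entry with no free kretprobe_instance bumps nmissed. */
            pr_info("missed %d probe activations\n", my_kretprobe.nmissed);
    }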

Kretprobe entry-handler
^^^^^^^^^^^^^^^^^^^^^^^

Kretprobes also provides an optional user-specified handler which runs
on function entry. This handler is specified by setting the entry_handler
field of the kretprobe struct. Whenever the kprobe placed by kretprobe at the
function entry is hit, the user-defined entry_handler, if any, is invoked.
If the entry_handler returns 0 (success), then a corresponding return handler
is guaranteed to be called upon function return. If the entry_handler
returns a non-zero error, then Kprobes leaves the return address as is, and
the kretprobe has no further effect for that particular function instance.

Multiple entry and return handler invocations are matched using the unique
kretprobe_instance object associated with them. Additionally, a user
may also specify per return-instance private data to be part of each
kretprobe_instance object. This is especially useful when sharing private
data between corresponding user entry and return handlers. The size of each
private data object can be specified at kretprobe registration time by
setting the data_size field of the kretprobe struct. This data can be
accessed through the data field of each kretprobe_instance object.

If the probed function is entered but no kretprobe_instance object is
available, then in addition to incrementing the nmissed count, the
user entry_handler invocation is also skipped.
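
A common use of per-instance data, sketched below, is to timestamp
function entry in the entry_handler and compute the elapsed time in the
return handler. This mirrors the approach of
samples/kprobes/kretprobe_example.c, but the struct and handler names
here are illustrative::

    /* Sketch: sharing data between entry and return handlers via ri->data. */
    #include <linux/kprobes.h>
    #include <linux/ktime.h>

    struct my_data {                    /* illustrative per-instance payload */
            ktime_t entry_stamp;
    };

    static int my_entry_handler(struct kretprobe_instance *ri,
                                struct pt_regs *regs)
    {
            struct my_data *data = (struct my_data *)ri->data;

            data->entry_stamp = ktime_get();
            return 0;                   /* 0: arm the return handler */
    }

    static int my_ret_handler(struct kretprobe_instance *ri,
                              struct pt_regs *regs)
    {
            struct my_data *data = (struct my_data *)ri->data;
            s64 delta = ktime_to_ns(ktime_sub(ktime_get(), data->entry_stamp));

            pr_info("probed function took %lld ns\n", delta);
            return 0;
    }

    static struct kretprobe my_kretprobe = {
            .entry_handler  = my_entry_handler,
            .handler        = my_ret_handler,
            .data_size      = sizeof(struct my_data),
            /* .kp.symbol_name and maxactive set as usual before registering */
    };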

.. _kprobes_jump_optimization:

How Does Jump Optimization Work?
--------------------------------

If your kernel is built with CONFIG_OPTPROBES=y (currently this flag
is automatically set to 'y' on x86/x86-64 for non-preemptive kernels) and
the "debug.kprobes_optimization" kernel parameter is set to 1 (see
sysctl(8)), Kprobes tries to reduce probe-hit overhead by using a jump
instruction instead of a breakpoint instruction at each probepoint.
Init a Kprobe
^^^^^^^^^^^^^

When a probe is registered, before attempting this optimization,
Kprobes inserts an ordinary, breakpoint-based kprobe at the specified
address. So, even if it's not possible to optimize this particular
probepoint, there'll be a probe there.
Safety Check
^^^^^^^^^^^^

Before optimizing a probe, Kprobes performs the following safety checks:

- Kprobes verifies that the region that will be replaced by the jump
  instruction (the "optimized region") lies entirely within one function.
  (A jump instruction is multiple bytes, and so may overlay multiple
  instructions.)

- Kprobes analyzes the entire function and verifies that there is no
  jump into the optimized region. Specifically:

  - the function contains no indirect jump;
  - the function contains no instruction that causes an exception (since
    the fixup code triggered by the exception could jump back into the
    optimized region -- Kprobes checks the exception tables to verify this);
  - there is no near jump to the optimized region (other than to the first
    byte).

- For each instruction in the optimized region, Kprobes verifies that
  the instruction can be executed out of line.
Preparing Detour Buffer
^^^^^^^^^^^^^^^^^^^^^^^

Next, Kprobes prepares a "detour" buffer, which contains the following
instruction sequence:

- code to push the CPU's registers (emulating a breakpoint trap)
- a call to the trampoline code which calls user's probe handlers.
- code to restore registers
- the instructions from the optimized region
- a jump back to the original execution path.
Pre-optimization
^^^^^^^^^^^^^^^^

After preparing the detour buffer, Kprobes verifies that none of the
following situations exist:

- The probe has a post_handler.
- Other instructions in the optimized region are probed.
- The probe is disabled.

In any of the above cases, Kprobes won't start optimizing the probe.
Since these are temporary situations, Kprobes tries to start
optimizing it again once the situation changes.

If the kprobe can be optimized, Kprobes enqueues the kprobe on an
optimizing list and kicks the kprobe-optimizer workqueue to optimize
it. If the to-be-optimized probepoint is hit before being optimized,
Kprobes returns control to the original instruction path by setting
the CPU's instruction pointer to the copied code in the detour buffer
-- thus at least avoiding the single-step.
Optimization
^^^^^^^^^^^^

The Kprobe-optimizer doesn't insert the jump instruction immediately;
rather, it calls synchronize_sched() for safety first, because it's
possible for a CPU to be interrupted in the middle of executing the
optimized region [3]_. As you know, synchronize_sched() can ensure
that all interruptions that were active when synchronize_sched()
was called are done, but only if CONFIG_PREEMPT=n. So, this version
of kprobe optimization supports only kernels with CONFIG_PREEMPT=n [4]_.

After that, the Kprobe-optimizer calls stop_machine() to replace
the optimized region with a jump instruction to the detour buffer,
using text_poke_smp().
Unoptimization
^^^^^^^^^^^^^^

When an optimized kprobe is unregistered, disabled, or blocked by
another kprobe, it will be unoptimized. If this happens before
the optimization is complete, the kprobe is just dequeued from the
optimizing list. If the optimization has been done, the jump is
replaced with the original code (except for an int3 breakpoint in
the first byte) by using text_poke_smp().

.. [3] Please imagine that the 2nd instruction is interrupted and then
   the optimizer replaces the 2nd instruction with the jump *address*
   while the interrupt handler is running. When the interrupt
   returns to the original address, there is no valid instruction there,
   and it causes an unexpected result.

.. [4] This optimization-safety checking may be replaced with the
   stop-machine method that ksplice uses for supporting a CONFIG_PREEMPT=y
   kernel.

NOTE for geeks:
The jump optimization changes the kprobe's pre_handler behavior.
Without optimization, the pre_handler can change the kernel's execution
path by changing regs->ip and returning 1. However, when the probe
is optimized, that modification is ignored. Thus, if you want to
tweak the kernel's execution path, you need to suppress optimization,
using one of the following techniques:

- Specify an empty function for the kprobe's post_handler.

or

- Execute 'sysctl -w debug.kprobes_optimization=n'
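
A hedged sketch of the first technique (all names are illustrative):
registering an empty post_handler keeps the probepoint un-optimized, so
a pre_handler that rewrites regs->ip keeps working::

    /* Sketch: an empty post_handler pins the probe to the breakpoint path,
     * so pre_handler-driven execution-path changes are honored. */
    #include <linux/kprobes.h>

    static int my_pre(struct kprobe *p, struct pt_regs *regs)
    {
            /* ... would modify regs->ip here and return 1 ... */
            return 0;
    }

    static void my_empty_post(struct kprobe *p, struct pt_regs *regs,
                              unsigned long flags)
    {
            /* intentionally empty: its presence alone disables optimization */
    }

    static struct kprobe my_kprobe = {
            .symbol_name  = "some_probed_symbol",   /* placeholder */
            .pre_handler  = my_pre,
            .post_handler = my_empty_post,
    };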

.. _kprobes_blacklist:

Blacklist
---------

Kprobes can probe most of the kernel except itself. This means
that there are some functions where kprobes cannot probe. Probing
(trapping) such functions can cause a recursive trap (e.g. double
fault) or the nested probe handler may never be called.
Kprobes manages such functions as a blacklist.
If you want to add a function into the blacklist, you just need
to (1) include linux/kprobes.h and (2) use the NOKPROBE_SYMBOL() macro
to specify the blacklisted function.
Kprobes checks the given probe address against the blacklist and
rejects registering it if the given address is in the blacklist.
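
A hedged example of blacklisting (the function name is made up for
illustration)::

    /* Sketch: mark a function as not probeable by Kprobes. */
    #include <linux/kprobes.h>

    static int critical_low_level_helper(int arg)   /* hypothetical */
    {
            /* ... code that must never trap into a probe handler ... */
            return arg;
    }
    NOKPROBE_SYMBOL(critical_low_level_helper);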

.. _kprobes_archs_supported:

Architectures Supported
=======================

Kprobes and return probes are implemented on the following
architectures:

- i386 (Supports jump optimization)
- x86_64 (AMD-64, EM64T) (Supports jump optimization)
- ppc64
- ia64 (Does not support probes on instruction slot1.)
- sparc64 (Return probes not yet implemented.)
- arm
- ppc
- mips
- s390

Configuring Kprobes
===================

When configuring the kernel using make menuconfig/xconfig/oldconfig,
ensure that CONFIG_KPROBES is set to "y". Under "General setup", look
for "Kprobes".

So that you can load and unload Kprobes-based instrumentation modules,
make sure "Loadable module support" (CONFIG_MODULES) and "Module
unloading" (CONFIG_MODULE_UNLOAD) are set to "y".

Also make sure that CONFIG_KALLSYMS and perhaps even CONFIG_KALLSYMS_ALL
are set to "y", since kallsyms_lookup_name() is used by the in-kernel
kprobe address resolution code.

If you need to insert a probe in the middle of a function, you may find
it useful to "Compile the kernel with debug info" (CONFIG_DEBUG_INFO),
so you can use "objdump -d -l vmlinux" to see the source-to-object
code mapping.

API Reference
=============

The Kprobes API includes a "register" function and an "unregister"
function for each type of probe. The API also includes "register_*probes"
and "unregister_*probes" functions for (un)registering arrays of probes.
Here are terse, mini-man-page specifications for these functions and
the associated probe handlers that you'll write. See the files in the
samples/kprobes/ sub-directory for examples.

register_kprobe
---------------

::

    #include <linux/kprobes.h>
    int register_kprobe(struct kprobe *kp);

Sets a breakpoint at the address kp->addr. When the breakpoint is
hit, Kprobes calls kp->pre_handler. After the probed instruction
is single-stepped, Kprobes calls kp->post_handler. If a fault
occurs during execution of kp->pre_handler or kp->post_handler,
or during single-stepping of the probed instruction, Kprobes calls
kp->fault_handler. Any or all handlers can be NULL. If KPROBE_FLAG_DISABLED
is set in kp->flags, that kp will be registered but disabled, so its
handlers aren't hit until enable_kprobe(kp) is called.

.. note::

   1. With the introduction of the "symbol_name" field to struct kprobe,
      the probepoint address resolution will now be taken care of by the
      kernel. The following will now work::

        kp.symbol_name = "symbol_name";

      (64-bit powerpc intricacies such as function descriptors are handled
      transparently)

   2. Use the "offset" field of struct kprobe if the offset into the symbol
      to install a probepoint is known. This field is used to calculate the
      probepoint.

   3. Specify either the kprobe "symbol_name" OR the "addr". If both are
      specified, kprobe registration will fail with -EINVAL.

   4. With CISC architectures (such as i386 and x86_64), the kprobes code
      does not validate whether kprobe.addr is at an instruction boundary.
      Use "offset" with caution.

register_kprobe() returns 0 on success, or a negative errno otherwise.

User's pre-handler (kp->pre_handler)::

    #include <linux/kprobes.h>
    #include <linux/ptrace.h>
    int pre_handler(struct kprobe *p, struct pt_regs *regs);

Called with p pointing to the kprobe associated with the breakpoint,
and regs pointing to the struct containing the registers saved when
the breakpoint was hit. Return 0 here unless you're a Kprobes geek.

User's post-handler (kp->post_handler)::

    #include <linux/kprobes.h>
    #include <linux/ptrace.h>
    void post_handler(struct kprobe *p, struct pt_regs *regs,
                      unsigned long flags);

p and regs are as described for the pre_handler. flags always seems
to be zero.

User's fault-handler (kp->fault_handler)::

    #include <linux/kprobes.h>
    #include <linux/ptrace.h>
    int fault_handler(struct kprobe *p, struct pt_regs *regs, int trapnr);

p and regs are as described for the pre_handler. trapnr is the
architecture-specific trap number associated with the fault (e.g.,
on i386, 13 for a general protection fault or 14 for a page fault).
Returns 1 if it successfully handled the exception.
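
Tying these signatures together, here is a hedged sketch of a kprobe
that installs all three handlers by symbol name; the symbol, the offset
value, and the handler bodies are illustrative assumptions, not part of
the API::

    /* Sketch: one kprobe wiring up pre-, post- and fault-handlers. */
    #include <linux/kprobes.h>
    #include <linux/ptrace.h>

    static int my_pre(struct kprobe *p, struct pt_regs *regs)
    {
            pr_info("pre: hit %s+0x%x\n", p->symbol_name, p->offset);
            return 0;                           /* continue normally */
    }

    static void my_post(struct kprobe *p, struct pt_regs *regs,
                        unsigned long flags)
    {
            pr_info("post: single-step of the copied insn finished\n");
    }

    static int my_fault(struct kprobe *p, struct pt_regs *regs, int trapnr)
    {
            pr_info("fault: trapnr %d while handling the probe\n", trapnr);
            return 0;                           /* let the kernel handle it */
    }

    static struct kprobe my_kprobe = {
            .symbol_name   = "vfs_write",       /* assumed symbol */
            .offset        = 0,                 /* probe the function entry */
            .pre_handler   = my_pre,
            .post_handler  = my_post,
            .fault_handler = my_fault,
    };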

register_kretprobe
------------------

::

    #include <linux/kprobes.h>
    int register_kretprobe(struct kretprobe *rp);

Establishes a return probe for the function whose address is
rp->kp.addr. When that function returns, Kprobes calls rp->handler.
You must set rp->maxactive appropriately before you call
register_kretprobe(); see "How Does a Return Probe Work?" for details.

register_kretprobe() returns 0 on success, or a negative errno
otherwise.

User's return-probe handler (rp->handler)::

    #include <linux/kprobes.h>
    #include <linux/ptrace.h>
    int kretprobe_handler(struct kretprobe_instance *ri,
                          struct pt_regs *regs);

regs is as described for kprobe.pre_handler. ri points to the
kretprobe_instance object, of which the following fields may be
of interest:

- ret_addr: the return address
- rp: points to the corresponding kretprobe object
- task: points to the corresponding task struct
- data: points to per return-instance private data; see "Kretprobe
  entry-handler" for details.

The regs_return_value(regs) macro provides a simple abstraction to
extract the return value from the appropriate register as defined by
the architecture's ABI.

The handler's return value is currently ignored.
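
For instance, a return handler can log the probed function's return
value via regs_return_value(); a hedged sketch (the handler name and
message are illustrative)::

    /* Sketch: report the probed function's return value. */
    #include <linux/kprobes.h>
    #include <linux/ptrace.h>

    static int my_ret_handler(struct kretprobe_instance *ri,
                              struct pt_regs *regs)
    {
            unsigned long retval = regs_return_value(regs);

            pr_info("%s returned %ld\n", ri->rp->kp.symbol_name, (long)retval);
            return 0;       /* the handler's return value is ignored */
    }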

unregister_*probe
------------------

::

    #include <linux/kprobes.h>
    void unregister_kprobe(struct kprobe *kp);
    void unregister_kretprobe(struct kretprobe *rp);

Removes the specified probe. The unregister function can be called
at any time after the probe has been registered.

.. note::

   If the functions find an incorrect probe (e.g., an unregistered probe),
   they clear the addr field of the probe.

register_*probes
----------------

::

    #include <linux/kprobes.h>
    int register_kprobes(struct kprobe **kps, int num);
    int register_kretprobes(struct kretprobe **rps, int num);

Registers each of the num probes in the specified array. If any
error occurs during registration, all probes in the array, up to
the bad probe, are safely unregistered before the register_*probes
function returns.

- kps/rps: an array of pointers to ``*probe`` data structures
- num: the number of array entries

.. note::

   You have to allocate (or define) an array of pointers and set all
   of the array entries before using these functions.
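
A hedged sketch of batch registration and unregistration (the two
kprobe objects and their symbol names are placeholders)::

    /* Sketch: registering and unregistering a group of kprobes at once. */
    #include <linux/module.h>
    #include <linux/kernel.h>
    #include <linux/kprobes.h>

    static struct kprobe kp_read  = { .symbol_name = "vfs_read"  };
    static struct kprobe kp_write = { .symbol_name = "vfs_write" };

    /* The API takes an array of *pointers* to probes, fully filled in. */
    static struct kprobe *my_kprobes[] = { &kp_read, &kp_write };

    static int __init my_init(void)
    {
            /* On failure, probes registered so far are unregistered for us. */
            return register_kprobes(my_kprobes, ARRAY_SIZE(my_kprobes));
    }

    static void __exit my_exit(void)
    {
            unregister_kprobes(my_kprobes, ARRAY_SIZE(my_kprobes));
    }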

unregister_*probes
------------------

::

    #include <linux/kprobes.h>
    void unregister_kprobes(struct kprobe **kps, int num);
    void unregister_kretprobes(struct kretprobe **rps, int num);

Removes each of the num probes in the specified array at once.

.. note::

   If the functions find some incorrect probes (e.g., unregistered
   probes) in the specified array, they clear the addr field of those
   incorrect probes. However, other probes in the array are
   unregistered correctly.

disable_*probe
--------------

::

    #include <linux/kprobes.h>
    int disable_kprobe(struct kprobe *kp);
    int disable_kretprobe(struct kretprobe *rp);

Temporarily disables the specified ``*probe``. You can enable it again by
using enable_*probe(). The probe must already be registered.

enable_*probe
-------------

::

    #include <linux/kprobes.h>
    int enable_kprobe(struct kprobe *kp);
    int enable_kretprobe(struct kretprobe *rp);

Enables a ``*probe`` which has been disabled by disable_*probe(). The probe
must already be registered.
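
A hedged sketch combining these calls with the KPROBE_FLAG_DISABLED flag
described under register_kprobe(): the probe is registered in the
disabled state and only switched on later (the symbol name and the
toggling logic are illustrative)::

    /* Sketch: register a kprobe in the disabled state, arm it on demand. */
    #include <linux/kprobes.h>

    static struct kprobe my_kprobe = {
            .symbol_name = "vfs_read",           /* placeholder symbol */
            .flags       = KPROBE_FLAG_DISABLED, /* handlers not hit yet */
    };

    static int my_setup(void)
    {
            return register_kprobe(&my_kprobe);  /* registered but inert */
    }

    static void my_toggle(bool on)
    {
            if (on)
                    enable_kprobe(&my_kprobe);   /* start hitting handlers */
            else
                    disable_kprobe(&my_kprobe);  /* stop, stay registered */
    }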

Kprobes Features and Limitations
================================

Kprobes allows multiple probes at the same address. Also,
a probepoint for which there is a post_handler cannot be optimized.
So if you install a kprobe with a post_handler at an optimized
probepoint, the probepoint will be unoptimized automatically.

In general, you can install a probe anywhere in the kernel.
In particular, you can probe interrupt handlers. Known exceptions
are discussed in this section.

The register_*probe functions will return -EINVAL if you attempt
to install a probe in the code that implements Kprobes (mostly
kernel/kprobes.c and ``arch/*/kernel/kprobes.c``, but also functions such
as do_page_fault and notifier_call_chain).

If you install a probe in an inline-able function, Kprobes makes
no attempt to chase down all inline instances of the function and
install probes there. gcc may inline a function without being asked,
so keep this in mind if you're not seeing the probe hits you expect.

A probe handler can modify the environment of the probed function
-- e.g., by modifying kernel data structures, or by modifying the
contents of the pt_regs struct (which are restored to the registers
upon return from the breakpoint). So Kprobes can be used, for example,
to install a bug fix or to inject faults for testing. Kprobes, of
course, has no way to distinguish the deliberately injected faults
from the accidental ones. Don't drink and probe.

Kprobes makes no attempt to prevent probe handlers from stepping on
each other -- e.g., probing printk() and then calling printk() from a
probe handler. If a probe handler hits a probe, that second probe's
handlers won't be run in that instance, and the kprobe.nmissed member
of the second probe will be incremented.

As of Linux v2.6.15-rc1, multiple handlers (or multiple instances of
the same handler) may run concurrently on different CPUs.

Kprobes does not use mutexes or allocate memory except during
registration and unregistration.

Probe handlers are run with preemption disabled or interrupts disabled,
depending on the architecture and optimization state (e.g., kretprobe
handlers and optimized kprobe handlers run without interrupts disabled
on x86/x86-64). In any case, your handler should not yield the CPU
(e.g., by attempting to acquire a semaphore, or waiting for I/O).

Since a return probe is implemented by replacing the return
address with the trampoline's address, stack backtraces and calls
to __builtin_return_address() will typically yield the trampoline's
address instead of the real return address for kretprobed functions.
(As far as we can tell, __builtin_return_address() is used only
for instrumentation and error reporting.)

If the number of times a function is called does not match the number
of times it returns, registering a return probe on that function may
produce undesirable results. In such a case, a line::

    kretprobe BUG!: Processing kretprobe d000000000041aa8 @ c00000000004f48c

gets printed. With this information, you will be able to correlate the
exact instance of the kretprobe that caused the problem. We have the
do_exit() case covered. do_execve() and do_fork() are not an issue.
We're unaware of other specific cases where this could be a problem.

If, upon entry to or exit from a function, the CPU is running on
a stack other than that of the current task, registering a return
probe on that function may produce undesirable results. For this
reason, Kprobes doesn't support return probes (or kprobes)
on the x86_64 version of __switch_to(); the registration functions
return -EINVAL.

On x86/x86-64, since the Jump Optimization of Kprobes modifies
instructions widely, there are some limitations to optimization. To
explain it, we introduce some terminology. Imagine a 3-instruction
sequence consisting of two 2-byte instructions and one 3-byte
instruction.

::

                 IA
                  |
         [-2][-1][0][1][2][3][4][5][6][7]
                 [ins1][ins2][  ins3 ]
                 [<-     DCR       ->]
                 [<- JTPR ->]

         ins1: 1st Instruction
         ins2: 2nd Instruction
         ins3: 3rd Instruction
         IA:   Insertion Address
         JTPR: Jump Target Prohibition Region
         DCR:  Detoured Code Region

The instructions in DCR are copied to the out-of-line buffer
of the kprobe, because the bytes in DCR are replaced by
a 5-byte jump instruction. So there are several limitations.

a) The instructions in DCR must be relocatable.
b) The instructions in DCR must not include a call instruction.
c) JTPR must not be targeted by any jump or call instruction.
d) DCR must not straddle the border between functions.

These limitations are checked by the in-kernel instruction
decoder, so you don't need to worry about them.

Probe Overhead
==============

On a typical CPU in use in 2005, a kprobe hit takes 0.5 to 1.0
microseconds to process. Specifically, a benchmark that hits the same
probepoint repeatedly, firing a simple handler each time, reports 1-2
million hits per second, depending on the architecture. A return-probe
hit typically takes 50-75% longer than a kprobe hit.

When you have a return probe set on a function, adding a kprobe at
the entry to that function adds essentially no overhead.

Here are sample overhead figures (in usec) for different architectures::

    k = kprobe; r = return probe; kr = kprobe + return probe
    on same function

    i386: Intel Pentium M, 1495 MHz, 2957.31 bogomips
    k = 0.57 usec; r = 0.92; kr = 0.99

    x86_64: AMD Opteron 246, 1994 MHz, 3971.48 bogomips
    k = 0.49 usec; r = 0.80; kr = 0.82

    ppc64: POWER5 (gr), 1656 MHz (SMT disabled, 1 virtual CPU per physical CPU)
    k = 0.77 usec; r = 1.26; kr = 1.45

Optimized Probe Overhead
------------------------

Typically, an optimized kprobe hit takes 0.07 to 0.1 microseconds to
process. Here are sample overhead figures (in usec) for x86 architectures::

    k = unoptimized kprobe, b = boosted (single-step skipped),
    o = optimized kprobe, r = unoptimized kretprobe,
    rb = boosted kretprobe, ro = optimized kretprobe.

    i386: Intel(R) Xeon(R) E5410, 2.33GHz, 4656.90 bogomips
    k = 0.80 usec; b = 0.33; o = 0.05; r = 1.10; rb = 0.61; ro = 0.33

    x86-64: Intel(R) Xeon(R) E5410, 2.33GHz, 4656.90 bogomips
    k = 0.99 usec; b = 0.43; o = 0.06; r = 1.24; rb = 0.68; ro = 0.30

TODO
====

a. SystemTap (http://sourceware.org/systemtap): Provides a simplified
   programming interface for probe-based instrumentation. Try it out.
b. Kernel return probes for sparc64.
c. Support for other architectures.
d. User-space probes.
e. Watchpoint probes (which fire on data references).

Kprobes Example
===============

See samples/kprobes/kprobe_example.c

Kretprobes Example
==================

See samples/kprobes/kretprobe_example.c

For additional information on Kprobes, refer to the following URLs:

- http://www-106.ibm.com/developerworks/library/l-kprobes.html?ca=dgr-lnxw42Kprobe
- http://www.redhat.com/magazine/005mar05/features/kprobes/
- http://www-users.cs.umn.edu/~boutcher/kprobes/
- http://www.linuxsymposium.org/2006/linuxsymposium_procv2.pdf (pages 101-115)

Deprecated Features
===================

Jprobes is now a deprecated feature. People who are depending on it should
migrate to other tracing features or use older kernels. Please consider
migrating your tool to one of the following options:

- Use trace-event to trace target function with arguments.

  trace-event is a low-overhead (and almost no visible overhead if it
  is off) statically defined event interface. You can define new events
  and trace them via ftrace or any other tracing tools.

  See the following URLs:

  - https://lwn.net/Articles/379903/
  - https://lwn.net/Articles/381064/
  - https://lwn.net/Articles/383362/

- Use ftrace dynamic events (kprobe event) with perf-probe.

  If you build your kernel with debug info (CONFIG_DEBUG_INFO=y), you can
  find which register or stack slot is assigned to which local variable or
  argument by using perf-probe, and set up a new event to trace it.

  See the following documents:

  - Documentation/trace/kprobetrace.rst
  - Documentation/trace/events.rst
  - tools/perf/Documentation/perf-probe.txt

The kprobes debugfs interface
=============================

With recent kernels (> 2.6.20) the list of registered kprobes is visible
under the /sys/kernel/debug/kprobes/ directory (assuming debugfs is mounted
at /sys/kernel/debug).

/sys/kernel/debug/kprobes/list: Lists all registered probes on the system::

    c015d71a  k  vfs_read+0x0
    c03dedc5  r  tcp_v4_rcv+0x0

The first column provides the kernel address where the probe is inserted.
The second column identifies the type of probe (k - kprobe and r - kretprobe)
while the third column specifies the symbol+offset of the probe.
If the probed function belongs to a module, the module name is also
specified. The following columns show the probe status. If the probe is on
a virtual address that is no longer valid (module init sections, module
virtual addresses that correspond to modules that have been unloaded),
such probes are marked with [GONE]. If the probe is temporarily disabled,
such probes are marked with [DISABLED]. If the probe is optimized, it is
marked with [OPTIMIZED]. If the probe is ftrace-based, it is marked with
[FTRACE].

/sys/kernel/debug/kprobes/enabled: Turn kprobes ON/OFF forcibly.

Provides a knob to globally and forcibly turn registered kprobes ON or OFF.
By default, all kprobes are enabled. By echoing "0" to this file, all
registered probes will be disarmed, till such time as a "1" is echoed to this
file. Note that this knob just disarms and arms all kprobes and doesn't
change each probe's disabling state. This means that disabled kprobes (marked
[DISABLED]) will not be enabled if you turn ON all kprobes by this knob.

The kprobes sysctl interface
============================

/proc/sys/debug/kprobes-optimization: Turn kprobes optimization ON/OFF.

When CONFIG_OPTPROBES=y, this sysctl interface appears and it provides
a knob to globally and forcibly turn jump optimization (see section
:ref:`kprobes_jump_optimization`) ON or OFF. By default, jump optimization
is allowed (ON). If you echo "0" to this file or set
"debug.kprobes_optimization" to 0 via sysctl, all optimized probes will be
unoptimized, and any new probes registered after that will not be optimized.

Note that this knob *changes* the optimized state. This means that optimized
probes (marked [OPTIMIZED]) will be unoptimized (the [OPTIMIZED] tag will be
removed). If the knob is turned on, they will be optimized again.