unaligned-memory-access.txt 10 KB

UNALIGNED MEMORY ACCESSES
=========================

Linux runs on a wide variety of architectures which have varying behaviour
when it comes to memory access. This document presents some details about
unaligned accesses, why you need to write code that doesn't cause them,
and how to write such code!


The definition of an unaligned access
=====================================

Unaligned memory accesses occur when you try to read N bytes of data starting
from an address that is not evenly divisible by N (i.e. addr % N != 0).
For example, reading 4 bytes of data from address 0x10004 is fine, but
reading 4 bytes of data from address 0x10005 would be an unaligned memory
access.
The above may seem a little vague, as memory access can happen in different
ways. The context here is at the machine code level: certain instructions read
or write a number of bytes to or from memory (e.g. movb, movw, movl in x86
assembly). As will become clear, it is relatively easy to spot C statements
which will compile to multiple-byte memory access instructions, namely when
dealing with types such as u16, u32 and u64.


Natural alignment
=================

The rule mentioned above forms what we refer to as natural alignment:
when accessing N bytes of memory, the base memory address must be evenly
divisible by N, i.e. addr % N == 0.

When writing code, assume the target architecture has natural alignment
requirements.

In reality, only a few architectures require natural alignment on all sizes
of memory access. However, we must consider ALL supported architectures;
writing code that satisfies natural alignment requirements is the easiest way
to achieve full portability.
Why unaligned access is bad
===========================

The effects of performing an unaligned memory access vary from architecture
to architecture. It would be easy to write a whole document on the differences
here; a summary of the common scenarios is presented below:

 - Some architectures are able to perform unaligned memory accesses
   transparently, but there is usually a significant performance cost.
 - Some architectures raise processor exceptions when unaligned accesses
   happen. The exception handler is able to correct the unaligned access,
   at significant cost to performance.
 - Some architectures raise processor exceptions when unaligned accesses
   happen, but the exceptions do not contain enough information for the
   unaligned access to be corrected.
 - Some architectures are not capable of unaligned memory access, but will
   silently perform a different memory access to the one that was requested,
   resulting in a subtle code bug that is hard to detect!

It should be obvious from the above that if your code causes unaligned
memory accesses to happen, your code will not work correctly on certain
platforms and will cause performance problems on others.
Code that does not cause unaligned access
=========================================

At first, the concepts above may seem a little hard to relate to actual
coding practice. After all, you don't have a great deal of control over
memory addresses of certain variables, etc.

Fortunately things are not too complex, as in most cases, the compiler
ensures that things will work for you. For example, take the following
structure:

        struct foo {
                u16 field1;
                u32 field2;
                u8 field3;
        };

Let us assume that an instance of the above structure resides in memory
starting at address 0x10000. With a basic level of understanding, it would
not be unreasonable to expect that accessing field2 would cause an unaligned
access. You'd be expecting field2 to be located at offset 2 bytes into the
structure, i.e. address 0x10002, but that address is not evenly divisible
by 4 (remember, we're reading a 4 byte value here).

Fortunately, the compiler understands the alignment constraints, so in the
above case it would insert 2 bytes of padding in between field1 and field2.
Therefore, for standard structure types you can always rely on the compiler
to pad structures so that accesses to fields are suitably aligned (assuming
you do not cast the field to a type of different length).
Similarly, you can also rely on the compiler to align variables and function
parameters to a naturally aligned scheme, based on the size of the type of
the variable.

At this point, it should be clear that accessing a single byte (u8 or char)
will never cause an unaligned access, because all memory addresses are evenly
divisible by one.

On a related topic, with the above considerations in mind you may observe
that you could reorder the fields in the structure in order to place fields
where padding would otherwise be inserted, and hence reduce the overall
resident memory size of structure instances. The optimal layout of the
above example is:

        struct foo {
                u32 field2;
                u16 field1;
                u8 field3;
        };

For a natural alignment scheme, the compiler would only have to add a single
byte of padding at the end of the structure. This padding is added in order
to satisfy alignment constraints for arrays of these structures.
Another point worth mentioning is the use of __attribute__((packed)) on a
structure type. This GCC-specific attribute tells the compiler never to
insert any padding within structures, which is useful when you want to use a
C struct to represent some data that comes in a fixed arrangement 'off the
wire'.

You might be inclined to believe that usage of this attribute can easily
lead to unaligned accesses when accessing fields that do not satisfy
architectural alignment requirements. However, again, the compiler is aware
of the alignment constraints and will generate extra instructions to perform
the memory access in a way that does not cause unaligned access. Of course,
the extra instructions cause a loss in performance compared to the
non-packed case, so the packed attribute should only be used when avoiding
structure padding is of importance.
Code that causes unaligned access
=================================

With the above in mind, let's move on to a real life example of a function
that can cause an unaligned memory access. The following function, taken
from include/linux/etherdevice.h, is an optimized routine to compare two
ethernet MAC addresses for equality:

        bool ether_addr_equal(const u8 *addr1, const u8 *addr2)
        {
        #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
                u32 fold = ((*(const u32 *)addr1) ^ (*(const u32 *)addr2)) |
                           ((*(const u16 *)(addr1 + 4)) ^ (*(const u16 *)(addr2 + 4)));

                return fold == 0;
        #else
                const u16 *a = (const u16 *)addr1;
                const u16 *b = (const u16 *)addr2;
                return ((a[0] ^ b[0]) | (a[1] ^ b[1]) | (a[2] ^ b[2])) == 0;
        #endif
        }

In the above function, when the hardware has efficient unaligned access
capability, there is no issue with this code. But when the hardware isn't
able to access memory on arbitrary boundaries, the reference to a[0] causes
2 bytes (16 bits) to be read from memory starting at address addr1.

Think about what would happen if addr1 was an odd address such as 0x10003.
(Hint: it'd be an unaligned access.)

Despite the potential unaligned access problems with the above function, it
is included in the kernel anyway but is understood to only work normally on
16-bit-aligned addresses. It is up to the caller to ensure this alignment or
not use this function at all. This alignment-unsafe function is still useful
as it is a decent optimization for the cases when you can ensure alignment,
which is true almost all of the time in ethernet networking context.
Here is another example of some code that could cause unaligned accesses:

        void myfunc(u8 *data, u32 value)
        {
                [...]
                *((u32 *) data) = cpu_to_le32(value);
                [...]
        }

This code will cause unaligned accesses every time the data parameter points
to an address that is not evenly divisible by 4.

In summary, the 2 main scenarios where you may run into unaligned access
problems involve:

 1. Casting variables to types of different lengths
 2. Pointer arithmetic followed by access to at least 2 bytes of data
Avoiding unaligned accesses
===========================

The easiest way to avoid unaligned access is to use the get_unaligned() and
put_unaligned() macros provided by the <asm/unaligned.h> header file.

Going back to an earlier example of code that potentially causes unaligned
access:

        void myfunc(u8 *data, u32 value)
        {
                [...]
                *((u32 *) data) = cpu_to_le32(value);
                [...]
        }

To avoid the unaligned memory access, you would rewrite it as follows:

        void myfunc(u8 *data, u32 value)
        {
                [...]
                value = cpu_to_le32(value);
                put_unaligned(value, (u32 *) data);
                [...]
        }

The get_unaligned() macro works similarly. Assuming 'data' is a pointer to
memory and you wish to avoid unaligned access, its usage is as follows:

        u32 value = get_unaligned((u32 *) data);

These macros work for memory accesses of any length (not just 32 bits as
in the examples above). Be aware that when compared to standard access of
aligned memory, using these macros to access unaligned memory can be costly in
terms of performance.
If use of such macros is not convenient, another option is to use memcpy(),
where the source or destination (or both) are of type u8* or unsigned char*.
Due to the byte-wise nature of this operation, unaligned accesses are avoided.
Alignment vs. Networking
========================

On architectures that require aligned loads, networking requires that the IP
header is aligned on a four-byte boundary to optimise the IP stack. For
regular ethernet hardware, the constant NET_IP_ALIGN is used. On most
architectures this constant has the value 2 because the normal ethernet
header is 14 bytes long, so in order to get proper alignment one needs to
DMA to an address which can be expressed as 4*n + 2. One notable exception
here is powerpc which defines NET_IP_ALIGN to 0 because DMA to unaligned
addresses can be very expensive and dwarf the cost of unaligned loads.

For some ethernet hardware that cannot DMA to unaligned addresses like
4*n+2, or for non-ethernet hardware, this can be a problem, and it is then
required to copy the incoming frame into an aligned buffer. Because this is
unnecessary on architectures that can do unaligned accesses, the code can be
made dependent on CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS like so:

        #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
                skb = original skb
        #else
                skb = copy skb
        #endif
--
Authors: Daniel Drake <dsd@gentoo.org>,
         Johannes Berg <johannes@sipsolutions.net>

With help from: Alan Cox, Avuton Olrich, Heikki Orsila, Jan Engelhardt,
Kyle McMartin, Kyle Moffett, Randy Dunlap, Robert Hancock, Uli Kunitz,
Vadim Lobanov