Tor Padding Specification

Mike Perry, George Kadianakis

Note: This is an attempt to specify Tor as currently implemented. Future
versions of Tor will implement improved algorithms.

This document tries to cover how Tor chooses to use cover traffic to obscure
various traffic patterns from external and internal observers. Other
implementations MAY take other approaches, but implementors should be aware of
the anonymity and load-balancing implications of their choices.

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
"SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be
interpreted as described in RFC 2119.

1. Overview

Tor supports two classes of cover traffic: connection-level padding, and
circuit-level padding.

Connection-level padding uses the CELL_PADDING cell command for cover
traffic, whereas circuit-level padding uses the RELAY_COMMAND_DROP relay
command. CELL_PADDING is single-hop only and can be differentiated from
normal traffic by Tor relays ("internal" observers), but not by entities
monitoring Tor OR connections ("external" observers).

RELAY_COMMAND_DROP is multi-hop, and is not visible to intermediate Tor
relays, because the relay command field is covered by circuit layer
encryption. Moreover, Tor's 'recognized' field allows RELAY_COMMAND_DROP
padding to be sent to any intermediate node in a circuit (as per Section
6.1 of tor-spec.txt).

Tor uses both connection-level and circuit-level padding. Connection-level
padding is described in Section 2. Circuit-level padding is described in
Section 3.

The circuit-level padding system is completely orthogonal to the
connection-level padding. The connection-level padding system regards
circuit-level padding as normal data traffic, and hence the connection-level
padding system will not add any additional overhead while the circuit-level
padding system is actively padding.

2. Connection-level padding

2.1. Background

Tor clients and relays make use of CELL_PADDING to reduce the resolution of
connection-level metadata retention by ISPs and surveillance infrastructure.

Such metadata retention is implemented by Internet routers in the form of
Netflow, jFlow, Netstream, or IPFIX records. These records are emitted by
gateway routers in a raw form and then exported (often over plaintext) to a
"collector" that either records them verbatim, or reduces their granularity
further[1].

Netflow records and the associated data collection and retention tools are
highly configurable, and have many modes of operation, especially when
configured to handle high throughput. However, at ISP scale, per-flow records
are very likely to be employed, since they are the default, and also provide
very high resolution in terms of endpoint activity, second only to full packet
and/or header capture.

Per-flow records record the endpoint connection 5-tuple, as well as the
total number of bytes sent and received by that 5-tuple during a particular
time period. They can store additional fields as well, but it is primarily
the timing and bytecount information that concerns us.

When configured to provide per-flow data, routers emit these raw flow
records periodically for all active connections passing through them
based on two parameters: the "active flow timeout" and the "inactive
flow timeout".

The "active flow timeout" causes the router to emit a new record
periodically for every active TCP session that continuously sends data. The
default active flow timeout for most routers is 30 minutes, meaning that a
new record is created for every TCP session at least every 30 minutes, no
matter what. This value can be configured from 1 minute to 60 minutes on
major routers.

The "inactive flow timeout" is used by routers to create a new record if a
TCP session is inactive for some number of seconds. It allows routers to
avoid the need to track a large number of idle connections in memory, and
instead emit a separate record only when there is activity. This value
ranges from 10 seconds to 600 seconds on common routers. It appears as
though no routers support a value lower than 10 seconds.

For reference, here are default values and ranges (in parentheses, when
known) for common routers, along with citations to their manuals.

Some routers speak collection protocols other than Netflow, and in the
case of Juniper, use different timeouts for these protocols. Where this
is known to happen, it has been noted.

                            Inactive Timeout    Active Timeout
    Cisco IOS[3]            15s (10-600s)       30min (1-60min)
    Cisco Catalyst[4]       5min                32min
    Juniper (jFlow)[5]      15s (10-600s)       30min (1-60min)
    Juniper (Netflow)[6,7]  60s (10-600s)       30min (1-30min)
    H3C (Netstream)[8]      60s (60-600s)       30min (1-60min)
    Fortinet[9]             15s                 30min
    MicroTik[10]            15s                 30min
    nProbe[14]              30s                 120s
    Alcatel-Lucent[2]       15s (10-600s)       30min (1-600min)

The combination of the active and inactive netflow record timeouts allows us
to devise a low-cost padding defense that causes what would otherwise be
split records to "collapse" at the router even before they are exported to
the collector for storage. So long as a connection transmits data before the
"inactive flow timeout" expires, the router will continue to count the
total bytes on that flow before finally emitting a record at the "active
flow timeout".

This means that with a minimal amount of padding that prevents the "inactive
flow timeout" from expiring, it is possible to reduce the resolution of raw
per-flow netflow data to the total number of bytes sent and received in a 30
minute window. This is a vast reduction in resolution for HTTP, IRC, XMPP,
SSH, and other intermittent interactive traffic, especially when all
user traffic in that time period is multiplexed over a single connection
(as it is with Tor).

2.2. Implementation

Tor clients currently maintain one TLS connection to their Guard node to
carry actual application traffic, and make up to 3 additional connections to
other nodes to retrieve directory information.

We pad only the client's connection to the Guard node, and not any other
connection. We treat Bridge node connections to the Tor network as client
connections, and pad them, but we do not otherwise pad between normal relays.

Both clients and Guards will maintain a timer for all application (i.e.
non-directory) TLS connections. Every time a non-padding packet is sent or
received by either end, that endpoint will sample a timeout value from
between 1.5 seconds and 9.5 seconds using the max(X,X) distribution
described in Section 2.3. The time range is subject to consensus
parameters as specified in Section 2.6.

If the connection becomes active for any reason before this timer
expires, the timer is reset to a new random value between 1.5 and 9.5
seconds. If the connection remains inactive until the timer expires, a
single CELL_PADDING cell will be sent on that connection.

In this way, the connection will only be padded in the event that it is
idle, and will always transmit a packet before the minimum 10 second inactive
timeout.
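
The timer logic above can be summarized in a minimal sketch (an
illustration, not the tor source; sample_padding_timeout() and
send_cell_padding() are hypothetical helpers standing in for the Section
2.3 sampler and the cell emission path):

    #include <stdint.h>

    /* Hypothetical helpers, not tor APIs: */
    uint64_t sample_padding_timeout(uint64_t low_ms, uint64_t high_ms);
    void send_cell_padding(void);

    typedef struct {
      uint64_t pad_deadline_ms; /* absolute time to pad if still idle */
    } conn_padding_t;

    /* Called whenever a non-padding cell is sent or received by either
     * end: re-arm the timer with a fresh sample from
     * [nf_ito_low, nf_ito_high]. */
    void conn_padding_note_activity(conn_padding_t *p, uint64_t now_ms) {
      p->pad_deadline_ms = now_ms + sample_padding_timeout(1500, 9500);
    }

    /* Called when the timer fires: if the connection stayed idle, emit a
     * single CELL_PADDING cell, then re-arm. */
    void conn_padding_timer_cb(conn_padding_t *p, uint64_t now_ms) {
      if (now_ms >= p->pad_deadline_ms) {
        send_cell_padding();
        conn_padding_note_activity(p, now_ms);
      }
    }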

2.3. Padding Cell Timeout Distribution Statistics

Because the padding is bidirectional, and because both endpoints maintain
timers, the effective time before a padding packet is sent in either
direction is min(client_timeout, server_timeout).

If client_timeout and server_timeout are uniformly sampled, then the
distribution of min(client_timeout,server_timeout) is no longer uniform, and
the resulting average timeout (Exp[min(X,X)]) is much lower than the
midpoint of the timeout range.

To compensate for this, instead of sampling each endpoint timeout uniformly,
we instead sample it from max(X,X), where X is uniformly distributed.

If X is a random variable uniform from 0..R-1 (where R=high-low), then the
random variable Y = max(X,X) has Prob(Y == i) = (2.0*i + 1)/(R*R).

Then, when both sides apply timeouts sampled from Y, the resulting
bidirectional padding packet rate is now a third random variable:
Z = min(Y,Y).

The distribution of Z is slightly bell-shaped, but mostly flat around the
mean. It also turns out that Exp[Z] ~= Exp[X]. Here's a table of average
values for each random variable:

    R      Exp[X]   Exp[Z]   Exp[min(X,X)]   Exp[Y=max(X,X)]
    2000   999.5    1066     666.2           1332.8
    3000   1499.5   1599.5   999.5           1999.5
    5000   2499.5   2666     1666.2          3332.8
    6000   2999.5   3199.5   1999.5          3999.5
    7000   3499.5   3732.8   2332.8          4666.2
    8000   3999.5   4266.2   2666.2          5332.8
    10000  4999.5   5328     3332.8          6666.2
    15000  7499.5   7995     4999.5          9999.5
    20000  9999.5   10661    6666.2          13332.8

In this way, we maintain the property that the midpoint of the timeout range
is the expected mean time before a padding packet is sent in either
direction.
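
These expectations are easy to verify empirically. The following standalone
sketch (illustrative only; rand() has a slight modulo bias that is
irrelevant here) samples Y = max(X,X) at each endpoint and Z = min(Y,Y)
across them, and should print a value near the table's 4266.2 for R = 8000,
close to Exp[X] = 3999.5 per the claim Exp[Z] ~= Exp[X]:

    #include <stdio.h>
    #include <stdlib.h>

    /* One endpoint's timeout: Y = max(X,X), X uniform on 0..R-1. */
    static long sample_max_xx(long R) {
      long a = rand() % R, b = rand() % R;
      return a > b ? a : b;
    }

    int main(void) {
      const long R = 8000, trials = 1000000;
      double sum = 0;
      for (long i = 0; i < trials; i++) {
        long y1 = sample_max_xx(R);  /* client timeout */
        long y2 = sample_max_xx(R);  /* server timeout */
        sum += (y1 < y2) ? y1 : y2;  /* Z: whichever side fires first */
      }
      printf("Exp[Z] ~= %.1f (table says 4266.2)\n", sum / trials);
      return 0;
    }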

2.4. Maximum overhead bounds

With the default parameters and the above distribution, we expect a
padded connection to send one padding cell every 5.5 seconds. This
averages to 103 bytes per second full duplex (~52 bytes/sec in each
direction), assuming a 512 byte cell and 55 bytes of TLS+TCP+IP headers.

For a client connection that remains otherwise idle for its expected
~50 minute lifespan (governed by the circuit available timeout plus a
small additional connection timeout), this is about 154.5KB of overhead
in each direction (309KB total).

With 2.5M completely idle clients connected simultaneously, 52 bytes per
second amounts to 130MB/second in each direction network-wide, which is
roughly the current amount of Tor directory traffic[11]. Of course, our
2.5M daily users will neither be connected simultaneously, nor entirely
idle, so we expect the actual overhead to be much lower than this.
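
A quick sanity check of this arithmetic, as a sketch using the 512-byte
cell and 55-byte header figures from this section:

    #include <stdio.h>

    int main(void) {
      const double wire_bytes = 512 + 55;          /* cell + TLS/TCP/IP hdrs */
      const double full_duplex = wire_bytes / 5.5; /* ~103 bytes/sec total */
      const double per_dir = full_duplex / 2.0;    /* ~52 bytes/sec each way */
      const double idle_secs = 50 * 60.0;          /* ~50 minute lifespan */
      printf("%.0f B/s full duplex, %.0f B/s per direction, "
             "%.1f KB per direction\n",
             full_duplex, per_dir, per_dir * idle_secs / 1000.0);
      return 0;
    }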

2.5. Reducing or Disabling Padding via Negotiation

To allow mobile clients to either disable or reduce their padding overhead,
the CELL_PADDING_NEGOTIATE cell (tor-spec.txt section 7.2) may be sent from
clients to relays. This cell is used to instruct relays to cease sending
padding.

If the client has opted to use reduced padding, it continues to send
padding cells sampled from the range [9000,14000] milliseconds (subject to
consensus parameter alteration as per Section 2.6), still using the
Y=max(X,X) distribution. Since the padding is now unidirectional, the
expected frequency of padding cells is now governed by the Y distribution
above as opposed to Z. For a range of 5000ms, we can see that we expect to
send a padding packet every 9000+3332.8 = 12332.8ms. We also halve the
circuit available timeout from ~50min down to ~25min, which causes the
client's OR connections to be closed shortly thereafter when idle,
thus reducing overhead.

These two changes cause the padding overhead to go from 309KB per one-time-use
Tor connection down to 69KB per one-time-use Tor connection. For continual
usage, the maximum overhead goes from 103 bytes/sec down to 46 bytes/sec.
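
The 12332.8ms figure can be recomputed directly from the Prob(Y == i)
formula in Section 2.3; a short sketch (illustrative only):

    #include <stdio.h>

    int main(void) {
      const long R = 5000; /* 14000ms - 9000ms */
      double exp_y = 0;
      for (long i = 0; i < R; i++)
        exp_y += i * (2.0 * i + 1) / ((double)R * R); /* i * Prob(Y == i) */
      printf("expected interval: %.1f ms\n", 9000 + exp_y); /* ~12332.8 */
      return 0;
    }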

If a client opts to completely disable padding, it sends a
CELL_PADDING_NEGOTIATE cell to instruct the relay not to pad, and then does
not send any further padding itself.

2.6. Consensus Parameters Governing Behavior

Connection-level padding is controlled by the following consensus parameters:

  * nf_ito_low
    - The low end of the range to send padding when inactive, in ms.
    - Default: 1500

  * nf_ito_high
    - The high end of the range to send padding, in ms.
    - Default: 9500
    - If nf_ito_low == nf_ito_high == 0, padding will be disabled.

  * nf_ito_low_reduced
    - For reduced padding clients: the low end of the range to send padding
      when inactive, in ms.
    - Default: 9000

  * nf_ito_high_reduced
    - For reduced padding clients: the high end of the range to send padding,
      in ms.
    - Default: 14000

  * nf_conntimeout_clients
    - The number of seconds to keep circuits opened and available for
      clients to use. Note that the actual client timeout is randomized
      uniformly from this value to twice this value. This governs client
      OR conn lifespan. Reduced padding clients use half the consensus
      value.
    - Default: 1800

  * nf_pad_before_usage
    - If set to 1, OR connections are padded before the client uses them
      for any application traffic. If 0, OR connections are not padded
      until application data begins.
    - Default: 1

  * nf_pad_relays
    - If set to 1, we also pad inactive relay-to-relay connections.
    - Default: 0

  * nf_conntimeout_relays
    - The number of seconds that idle relay-to-relay connections are kept
      open.
    - Default: 3600
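
As a sketch of how a client might resolve its effective timeout range from
these parameters (get_ns_param() is a hypothetical consensus lookup, not
tor's internal API):

    #include <stdint.h>

    /* Hypothetical: returns the named consensus parameter, or dflt if it
     * is absent from the current consensus. */
    int32_t get_ns_param(const char *name, int32_t dflt);

    /* Fills in the [low, high] range in ms; returns 0 if padding is
     * disabled per the nf_ito_high note above. */
    int padding_get_ito_range(int reduced_padding,
                              int32_t *low, int32_t *high) {
      if (reduced_padding) {
        *low  = get_ns_param("nf_ito_low_reduced", 9000);
        *high = get_ns_param("nf_ito_high_reduced", 14000);
      } else {
        *low  = get_ns_param("nf_ito_low", 1500);
        *high = get_ns_param("nf_ito_high", 9500);
      }
      return !(*low == 0 && *high == 0);
    }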

3. Circuit-level padding

The circuit padding system in Tor is an extension of the WTF-PAD
event-driven state machine design[15]. At a high level, this design places
one or more padding state machines at the client, and one or more padding
state machines at a relay, on each circuit.

State transition and histogram generation has been generalized to be fully
programmable, and probability distribution support was added to support more
compact representations like APE[16]. Additionally, packet count limits,
rate limiting, and circuit application conditions have been added.

At present, Tor uses this system to deploy two pairs of circuit padding
machines, to obscure differences between the setup phase of client-side
onion service circuits, up to the first 10 cells.

This specification covers only the resulting behavior of these padding
machines, and thus does not cover the state machine implementation details or
operation. For full details on using the circuit padding system to develop
future padding defenses, see the research developer documentation[17].

3.1. Circuit Padding Negotiation

Circuit padding machines are advertised as "Padding" subprotocol versions
(see tor-spec.txt Section 9). The onion service circuit padding machines are
advertised as "Padding=2".

Because circuit padding machines only become active at certain points in
circuit lifetime, and because more than one padding machine may be active at
any given point in circuit lifetime, there is also a padding negotiation cell,
with fields as follows:

    const CIRCPAD_COMMAND_STOP = 1;
    const CIRCPAD_COMMAND_START = 2;

    const CIRCPAD_RESPONSE_OK = 1;
    const CIRCPAD_RESPONSE_ERR = 2;

    const CIRCPAD_MACHINE_CIRC_SETUP = 1;

    struct circpad_negotiate {
      u8 version IN [0];
      u8 command IN [CIRCPAD_COMMAND_START, CIRCPAD_COMMAND_STOP];
      u8 machine_type IN [CIRCPAD_MACHINE_CIRC_SETUP];
    };

When a client wants to start a circuit padding machine, it first checks that
the desired destination hop advertises the appropriate subprotocol version for
that machine. It then sends a circpad_negotiate cell to that hop with
command=CIRCPAD_COMMAND_START and machine_type=CIRCPAD_MACHINE_CIRC_SETUP (for
the circuit setup machine, the destination hop is the second hop in the
circuit).

When a relay receives a circpad_negotiate cell, it checks that it supports
the requested machine, and sends a circpad_negotiated cell, which is formatted
as follows:

    struct circpad_negotiated {
      u8 version IN [0];
      u8 command IN [CIRCPAD_COMMAND_START, CIRCPAD_COMMAND_STOP];
      u8 response IN [CIRCPAD_RESPONSE_OK, CIRCPAD_RESPONSE_ERR];
      u8 machine_type IN [CIRCPAD_MACHINE_CIRC_SETUP];
    };

If the machine is supported, the response field will contain
CIRCPAD_RESPONSE_OK. If it is not, it will contain CIRCPAD_RESPONSE_ERR.

Either side may send a CIRCPAD_COMMAND_STOP to shut down the padding machines
(clients MUST only send circpad_negotiate, and relays MUST only send
circpad_negotiated for this purpose).
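
For illustration, a minimal encoder for the circpad_negotiate body laid out
above (a sketch of the three-byte wire form; tor itself uses generated
trunnel code for this):

    #include <stdint.h>
    #include <stddef.h>

    #define CIRCPAD_COMMAND_START      2
    #define CIRCPAD_MACHINE_CIRC_SETUP 1

    /* Writes version, command, machine_type in order; returns the number
     * of bytes written, or 0 if the buffer is too small. */
    size_t circpad_negotiate_encode(uint8_t *buf, size_t buflen,
                                    uint8_t command, uint8_t machine_type) {
      if (buflen < 3)
        return 0;
      buf[0] = 0;            /* version IN [0] */
      buf[1] = command;      /* e.g. CIRCPAD_COMMAND_START */
      buf[2] = machine_type; /* e.g. CIRCPAD_MACHINE_CIRC_SETUP */
      return 3;
    }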

3.2. Circuit Padding Machine Message Management

Clients MAY send padding cells towards the relay before receiving the
circpad_negotiated response, to allow for outbound cover traffic before
negotiation completes.

Clients MAY send another circpad_negotiate cell before receiving the
circpad_negotiated response, to allow for rapid machine changes.

Relays MUST NOT send padding cells or circpad_negotiated cells, unless a
padding machine is active. Any padding-related cells that arrive at the client
from unexpected relay sources are protocol violations, and clients MAY
immediately tear down such circuits to avoid side channel risk.

3.3. Obfuscating client-side onion service circuit setup

The circuit padding currently deployed in Tor attempts to hide client-side
onion service circuit setup. Service-side setup is not covered, because doing
so would involve significantly more overhead, and/or require interaction with
the application layer.

The approach taken aims to make client-side introduction and rendezvous
circuits match the cell direction sequence and cell count of 3-hop general
circuits used for normal web traffic, for the first 10 cells only. The
lifespan of introduction circuits is also made to match the lifespan
of general circuits.

Note that inter-arrival timing is not obfuscated by this defense.

3.3.1. Common general circuit construction sequences

Most general Tor circuits used to surf the web or download directory
information start with the following 6-cell relay cell sequence (cells
surrounded in [brackets] are outgoing, the others are incoming):

  [EXTEND2] -> EXTENDED2 -> [EXTEND2] -> EXTENDED2 -> [BEGIN] -> CONNECTED

When this is done, the client has established a 3-hop circuit and also opened
a stream to the other end. Usually after this comes a series of DATA cells
that either fetch pages, establish an SSL connection, or fetch directory
information:

  [DATA] -> [DATA] -> DATA -> DATA...(inbound cells continue)

The above stream of 10 relay cells covers the vast majority of general
circuits that come out of Tor Browser during our testing, and it's what we use
to make introduction and rendezvous circuits blend in.

Please note that in this section we only investigate relay cells and not
connection-level cells like CREATE/CREATED or AUTHENTICATE/etc. that are used
during the link-layer handshake. The rationale is that connection-level cells
depend on the type of guard used and are not an effective fingerprint for a
network/guard-level adversary.

3.3.2. Client-side onion service introduction circuit obfuscation

Two circuit padding machines work to hide client-side introduction circuits:
one machine at the origin, and one machine at the second hop of the circuit.
Each machine sends padding towards the other. The padding from the origin-side
machine terminates at the second hop and does not get forwarded to the actual
introduction point.

From Section 3.3.1 above, most general circuits have the following initial
relay cell sequence (outgoing cells marked in [brackets]):

  [EXTEND2] -> EXTENDED2 -> [EXTEND2] -> EXTENDED2 -> [BEGIN] -> CONNECTED
    -> [DATA] -> [DATA] -> DATA -> DATA...(inbound data cells continue)

Whereas normal introduction circuits usually look like:

  [EXTEND2] -> EXTENDED2 -> [EXTEND2] -> EXTENDED2 -> [EXTEND2] -> EXTENDED2
    -> [INTRO1] -> INTRODUCE_ACK

This means that up to the sixth cell (first line of each sequence above),
both general and intro circuits have identical cell sequences. After that
we want to mimic the second line sequence of:

  -> [DATA] -> [DATA] -> DATA -> DATA...(inbound data cells continue)

We achieve this by starting padding as soon as INTRODUCE1 has been sent. With
padding negotiation cells, the second line in the common case looks like:

  -> [INTRO1] -> [PADDING_NEGOTIATE] -> PADDING_NEGOTIATED -> INTRO_ACK

Then, the middle node will send between INTRO_MACHINE_MINIMUM_PADDING (7) and
INTRO_MACHINE_MAXIMUM_PADDING (10) cells, to match the "...(inbound data cells
continue)" portion of the trace (aka the rest of an HTTPS response body).

We also set a special flag which keeps the circuit open even after the
introduction is performed. With this feature the circuit will stay alive for
the same duration as normal web circuits before they expire (usually 10
minutes).
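
For instance, the middle node's padding count could be drawn as in the
sketch below; the uniform choice between the two bounds is an assumption
made for illustration, not a statement about the machine's actual sampling:

    #include <stdlib.h>

    #define INTRO_MACHINE_MINIMUM_PADDING 7
    #define INTRO_MACHINE_MAXIMUM_PADDING 10

    /* Pick how many DROP cells to send, to stand in for the tail of an
     * HTTPS response body (uniform choice assumed for illustration). */
    int intro_machine_sample_padding_count(void) {
      int span = INTRO_MACHINE_MAXIMUM_PADDING
                 - INTRO_MACHINE_MINIMUM_PADDING;
      return INTRO_MACHINE_MINIMUM_PADDING + rand() % (span + 1);
    }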

3.3.3. Client-side rendezvous circuit hiding

Following a similar argument as for intro circuits, we are aiming for padded
rendezvous circuits to blend in with the initial cell sequence of general
circuits, which usually looks like this:

  [EXTEND2] -> EXTENDED2 -> [EXTEND2] -> EXTENDED2 -> [BEGIN] -> CONNECTED
    -> [DATA] -> [DATA] -> DATA -> DATA...(incoming cells continue)

Whereas normal rendezvous circuits usually look like:

  [EXTEND2] -> EXTENDED2 -> [EXTEND2] -> EXTENDED2 -> [EST_REND] -> REND_EST
    -> REND2 -> [BEGIN]

This means that up to the sixth cell (the first line), both general and
rend circuits have identical cell sequences.

After that we want to mimic a [DATA] -> [DATA] -> DATA -> DATA sequence.
With padding negotiation right after the REND_ESTABLISHED, the sequence
becomes:

  [EXTEND2] -> EXTENDED2 -> [EXTEND2] -> EXTENDED2 -> [EST_REND] -> REND_EST
    -> [PADDING_NEGOTIATE] -> [DROP] -> PADDING_NEGOTIATED -> DROP...

After which normal application DATA cells continue on the circuit.

In this way we make rendezvous circuits look like general circuits up
until the end of circuit setup.

After that our machine gets deactivated, and we let the actual rendezvous
circuit shape the traffic flow. Since rendezvous circuits usually imitate
general circuits (their purpose is to surf the web), we can expect that they
will look alike.

3.3.4. Circuit setup machine overhead

For the intro circuit case, we see that the origin-side machine just sends a
single [PADDING_NEGOTIATE] cell, whereas the relay-side machine sends a
PADDING_NEGOTIATED cell and between 7 and 10 DROP cells. This means that the
average overhead of this machine is 11 padding cells per introduction circuit.

For the rend circuit case, this machine is quite light. Both sides send 2
padding cells, for a total of 4 padding cells.

3.4. Circuit padding consensus parameters

The circuit padding system has a handful of consensus parameters that can
either disable circuit padding entirely, or rate limit the total overhead
at relays and clients.

  * circpad_padding_disabled
    - If set to 1, no circuit padding machines will negotiate, and all
      current padding machines will cease padding immediately.
    - Default: 0

  * circpad_padding_reduced
    - If set to 1, only circuit padding machines marked as "reduced"/"low
      overhead" will be used. (Currently no such machines are marked as
      "reduced overhead".)
    - Default: 0

  * circpad_global_allowed_cells
    - This is the number of padding cells that must be sent before
      the 'circpad_global_max_padding_percent' parameter is applied.
    - Default: 0

  * circpad_global_max_padding_percent
    - This is the maximum ratio of padding cells to total cells, specified
      as a percent. If the global ratio of padding cells to total cells
      across all circuits exceeds this percent value, no more padding is sent
      until the ratio becomes lower. 0 means no limit.
    - Default: 0

  * circpad_max_circ_queued_cells
    - This is the maximum number of cells that can be in the circuitmux queue
      before padding stops being sent on that circuit.
    - Default: CIRCWINDOW_START_MAX (1000)
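
The interaction between the two global limit parameters can be summarized
in a sketch (illustrative, not tor's implementation):

    #include <stdint.h>

    /* Returns 1 if another padding cell may be sent under the global
     * limits above: the percent cap only applies once more than
     * allowed_cells padding cells have been sent, and max_percent == 0
     * means no limit. */
    int circpad_global_padding_allowed(uint64_t padding_cells,
                                       uint64_t total_cells,
                                       uint32_t allowed_cells,
                                       uint32_t max_percent) {
      if (max_percent == 0)
        return 1;
      if (padding_cells <= allowed_cells)
        return 1;
      /* padding/total <= max_percent/100, without division. */
      return padding_cells * 100 <= (uint64_t)max_percent * total_cells;
    }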

A. Acknowledgments

This research was supported in part by NSF grants CNS-1111539,
CNS-1314637, CNS-1526306, CNS-1619454, and CNS-1640548.

1. https://en.wikipedia.org/wiki/NetFlow
2. http://infodoc.alcatel-lucent.com/html/0_add-h-f/93-0073-10-01/7750_SR_OS_Router_Configuration_Guide/Cflowd-CLI.html
3. http://www.cisco.com/en/US/docs/ios/12_3t/netflow/command/reference/nfl_a1gt_ps5207_TSD_Products_Command_Reference_Chapter.html#wp1185203
4. http://www.cisco.com/c/en/us/support/docs/switches/catalyst-6500-series-switches/70974-netflow-catalyst6500.html#opconf
5. https://www.juniper.net/techpubs/software/erx/junose60/swconfig-routing-vol1/html/ip-jflow-stats-config4.html#560916
6. http://www.jnpr.net/techpubs/en_US/junos15.1/topics/reference/configuration-statement/flow-active-timeout-edit-forwarding-options-po.html
7. http://www.jnpr.net/techpubs/en_US/junos15.1/topics/reference/configuration-statement/flow-inactive-timeout-edit-forwarding-options-po.html
8. http://www.h3c.com/portal/Technical_Support___Documents/Technical_Documents/Switches/H3C_S9500_Series_Switches/Command/Command/H3C_S9500_CM-Release1648%5Bv1.24%5D-System_Volume/200901/624854_1285_0.htm#_Toc217704193
9. http://docs-legacy.fortinet.com/fgt/handbook/cli52_html/FortiOS%205.2%20CLI/config_system.23.046.html
10. http://wiki.mikrotik.com/wiki/Manual:IP/Traffic_Flow
11. https://metrics.torproject.org/dirbytes.html
12. http://freehaven.net/anonbib/cache/murdoch-pet2007.pdf
13. https://gitweb.torproject.org/torspec.git/tree/proposals/188-bridge-guards.txt
14. http://www.ntop.org/wp-content/uploads/2013/03/nProbe_UserGuide.pdf
15. http://arxiv.org/pdf/1512.00524
16. https://www.cs.kau.se/pulls/hot/thebasketcase-ape/
17. https://github.com/torproject/tor/tree/master/doc/HACKING/CircuitPaddingDevelopment.md
18. https://www.usenix.org/node/190967
    https://blog.torproject.org/technical-summary-usenix-fingerprinting-paper