Tor internal architecture

To understand how Tor affects client traffic, this section traces the flow of that traffic through a Tor relay's internal processing steps, following Figure 1. [1]

1 TCP connection multiplexing. All Tor relays communicate with one another over pairwise TCP connections, i.e., each pair of communicating relays maintains a single TCP connection between them. Since a pair of relays may be transferring data for multiple circuits at the same time, all circuits between that pair are multiplexed over their single TCP connection. Each circuit, in turn, may carry traffic for multiple application streams that a user is accessing. TCP provides reliable, in-order delivery of packets between relays, but multiplexing over a single connection can result in unfair kernel-level congestion control [47]. Understanding these three layers, connections, circuits, and streams, is essential to understanding Tor.
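
As a rough illustration of this three-level multiplexing, the sketch below models the containment of streams in circuits and circuits in connections. The struct and field names are illustrative only, not Tor's actual types:

```c
/* Illustrative model of Tor's multiplexing hierarchy: one TCP
 * connection per relay pair, many circuits per connection, many
 * streams per circuit. These are not Tor's real types. */
#include <stdio.h>

struct stream {
    int stream_id;
    struct stream *next;          /* next stream on the same circuit */
};

struct circuit {
    int circuit_id;
    struct stream *streams;       /* streams multiplexed over this circuit */
    struct circuit *next;         /* next circuit on the same connection */
};

struct relay_connection {
    int tcp_fd;                   /* the single TCP socket for this relay pair */
    struct circuit *circuits;     /* all circuits between the two relays */
};

int main(void)
{
    struct stream s = { 7, NULL };
    struct circuit c = { 42, &s, NULL };
    struct relay_connection conn = { 3, &c };
    printf("conn fd=%d -> circuit %d -> stream %d\n",
           conn.tcp_fd, conn.circuits->circuit_id,
           conn.circuits->streams->stream_id);
    return 0;
}
```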

2 TCP connection input. Tor uses libevent to handle input and output to and from kernel TCP buffers. Tor registers the sockets it wants to read with libevent and configures a notification callback function for each. When data arrives in a kernel TCP input buffer (Figure 1a), libevent learns of the active socket through its polling interface and asynchronously executes the corresponding read callback (Figure 1b). When executed, the read callback uses token buckets to determine read eligibility.
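
The callback registration described here can be sketched with libevent's API as follows. This is a minimal, hypothetical fragment, not Tor's real code: a complete program would also create the base with event_base_new() and enter the loop with event_base_dispatch(), and error handling is omitted.

```c
/* Sketch: registering a persistent read callback with libevent,
 * in the style described above. Compile with -levent. */
#include <event2/event.h>
#include <unistd.h>

static void read_cb(evutil_socket_t fd, short events, void *arg)
{
    char buf[4096];
    /* In Tor, token buckets would be consulted here before reading. */
    ssize_t n = read(fd, buf, sizeof(buf));
    (void)n; /* process the data: assemble cells, queue them on circuits... */
}

int watch_socket(struct event_base *base, evutil_socket_t sock)
{
    /* EV_PERSIST keeps the event active after each callback fires. */
    struct event *ev = event_new(base, sock, EV_READ | EV_PERSIST,
                                 read_cb, NULL);
    if (!ev)
        return -1;
    return event_add(ev, NULL); /* NULL timeout: wait indefinitely */
}
```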

Token buckets are used for connection rate limiting. Tor fills the buckets according to the configured bandwidth limits at one-second intervals and removes tokens from the buckets as data is read, although varying the refill interval is currently being explored as a way to improve performance [53]. There is a global read bucket that limits the read bandwidth across all connections, and a separate bucket intended for throttling on a per-connection basis (Figure 1c). A connection ignores read events while either the global bucket or its connection bucket is empty. In practice, per-connection token buckets are only enabled for edge (non-relay) connections. Throttling noisy connections (e.g. bulk transfers) reduces network congestion and often results in better performance [17].
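
A minimal sketch of the token-bucket logic described above, assuming a one-second timer elsewhere calls bucket_refill(); the struct and field names are illustrative, not Tor's:

```c
/* Token bucket sketch: refilled at a fixed rate, drained as data is
 * read, capped at a burst size. Illustrative names, not Tor's. */
#include <stdint.h>
#include <stdbool.h>

struct token_bucket {
    int64_t tokens;   /* bytes currently allowed */
    int64_t rate;     /* bytes added per refill interval (1 s by default) */
    int64_t burst;    /* cap: the bucket never exceeds this */
};

/* Called once per refill interval. */
void bucket_refill(struct token_bucket *b)
{
    b->tokens += b->rate;
    if (b->tokens > b->burst)
        b->tokens = b->burst;
}

/* Reading is only eligible when neither the global nor the
 * per-connection bucket is empty. */
bool read_eligible(const struct token_bucket *global,
                   const struct token_bucket *conn)
{
    return global->tokens > 0 && conn->tokens > 0;
}

/* A read of `len` bytes drains the bucket (it may briefly go
 * negative after a large read). */
void bucket_drain(struct token_bucket *b, int64_t len)
{
    b->tokens -= len;
}
```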

When a TCP input buffer is readable and the read-eligibility conditions are met, a round-robin (RR) scheduling mechanism is used to read, per connection, the smaller of 16 KiB and 1/8 of the global token bucket size (Figure 1d). This limit ensures fairness, so that a single connection cannot consume all of the global tokens in a single read. However, recent research has shown that this input/output scheduling can lead to an unfair allocation of resources [54]. Data read from the kernel TCP buffer is placed in a per-connection application input buffer for processing (Figure 1e). As data accumulates in a connection's input buffer, complete cells are immediately moved to the queue of their respective circuit (in FIFO order), where they wait for further processing. Cryptographic operations are also performed here to ensure anonymity; since previous work has shown that these operations are not a performance bottleneck [61], we can safely ignore them.
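
Assuming the "smaller of 16 KiB and 1/8 of the global bucket" rule described above, the per-connection read limit could be computed as in this hypothetical helper, applied to each connection in round-robin order:

```c
/* Sketch of the per-connection read limit described above. The 1/8
 * fraction and 16 KiB cap follow the text; this is not Tor's code. */
#include <stdint.h>

#define MAX_READ_PER_CONN (16 * 1024)  /* 16 KiB */

int64_t read_limit(int64_t global_bucket_tokens)
{
    int64_t share = global_bucket_tokens / 8; /* 1/8 of the global bucket */
    return share < MAX_READ_PER_CONN ? share : MAX_READ_PER_CONN;
}
```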

3 Flow control. Tor uses an end-to-end flow control algorithm to help maintain a steady flow of cells through circuits. The client and the exit relay constitute the edges of a circuit: each is an ingress or egress point for data traversing the Tor network. Edges track data flow for both circuits and streams using cell counters called circuit windows and stream windows. An edge decrements the corresponding circuit and stream windows as it transmits cells; when a stream window reaches zero, the edge stops reading from that stream, and when a circuit window reaches zero, the edge stops reading from all streams multiplexed over that circuit. Upon receiving a SENDME acknowledgment cell from the opposite edge, the windows are incremented and reading resumes.

By default, circuit windows are initialized to 1000 cells (500 KiB) and stream windows to 500 cells (250 KiB). After the egress edge receives 100 cells (50 KiB), it sends a circuit SENDME to the ingress edge, allowing the ingress edge to read, package, and forward 100 additional cells. A stream SENDME is sent after receiving 50 cells (25 KiB) and allows 50 additional cells. Window sizes have a significant impact on performance, and recent work presents an algorithm for computing them dynamically [7].
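
The window bookkeeping from the last two paragraphs, with the default sizes (Tor cells are 512 bytes, so 1000 cells is 500 KiB), might look like the following sketch. Names are illustrative; a real edge also has to actually pause and resume its reads:

```c
/* Sketch of Tor's end-to-end flow control with the default windows. */
#include <stdbool.h>

#define CIRC_WINDOW_START   1000  /* circuit window: 1000 cells (500 KiB) */
#define STREAM_WINDOW_START  500  /* stream window:   500 cells (250 KiB) */
#define CIRC_SENDME_INC      100  /* circuit SENDME per 100 cells (50 KiB) */
#define STREAM_SENDME_INC     50  /* stream SENDME per 50 cells (25 KiB) */

struct edge_flow {
    int circ_window;
    int stream_window;
};

/* An edge may only package a data cell while both windows are open. */
bool edge_can_send(const struct edge_flow *e)
{
    return e->circ_window > 0 && e->stream_window > 0;
}

/* Both windows are decremented per transmitted cell. */
void edge_sent_cell(struct edge_flow *e)
{
    e->circ_window--;    /* at 0: stop reading from all streams on the circuit */
    e->stream_window--;  /* at 0: stop reading from this stream only */
}

/* SENDME acknowledgments from the opposite edge reopen the windows. */
void edge_got_circ_sendme(struct edge_flow *e)   { e->circ_window += CIRC_SENDME_INC; }
void edge_got_stream_sendme(struct edge_flow *e) { e->stream_window += STREAM_SENDME_INC; }
```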

4 Cell processing and queuing. Data is processed as soon as it arrives in a connection's input buffer (Figure 1f), and each cell is encrypted or decrypted according to its direction through the circuit. The cell is then switched onto the circuit corresponding to its next hop and placed in that circuit's first-in-first-out (FIFO) queue (Figure 1g). Cells wait in the circuit queue until they are selected for writing by the circuit scheduler.
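
A per-circuit FIFO queue of fixed-size cells can be sketched as a simple linked list; this illustrates the queuing described above, not Tor's implementation:

```c
/* Sketch of a per-circuit FIFO cell queue. Cells are appended at the
 * tail after encryption/decryption; the scheduler pops from the head. */
#include <stddef.h>

struct cell {
    unsigned char payload[512];   /* Tor cells are fixed-size (512 bytes) */
    struct cell *next;
};

struct cell_queue {
    struct cell *head, *tail;
    int n;
};

void queue_push(struct cell_queue *q, struct cell *c)  /* enqueue at tail */
{
    c->next = NULL;
    if (q->tail) q->tail->next = c; else q->head = c;
    q->tail = c;
    q->n++;
}

struct cell *queue_pop(struct cell_queue *q)  /* dequeue from head (FIFO) */
{
    struct cell *c = q->head;
    if (!c) return NULL;
    q->head = c->next;
    if (!q->head) q->tail = NULL;
    q->n--;
    return c;
}
```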

5 Scheduling. When space becomes available in a connection's output buffer, the relay must decide which of the multiple multiplexed circuits to write from. Historically this was done round-robin, but an exponentially weighted moving average (EWMA) scheduler was recently introduced into Tor [52] and is the current default (Figure 1h). EWMA records the number of cells it schedules on each circuit and exponentially decays those cell counts over time. The scheduler writes one cell from the circuit with the lowest cell count and then updates the count. The decay means that recently transmitted cells have a greater impact on the count, while bursty traffic does not significantly affect scheduling priorities.
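
The EWMA policy described above can be sketched as follows. The half-life parameter is illustrative (in Tor it is set via a consensus parameter), and a real scheduler would keep circuits sorted rather than scanning them all:

```c
/* Sketch of EWMA circuit scheduling: each circuit keeps a cell count
 * that decays exponentially over time; the circuit with the lowest
 * count wins the next write. Compile with -lm. Not Tor's code. */
#include <math.h>

struct circ_ewma {
    double cell_count;    /* decayed count of recently scheduled cells */
    double last_update;   /* time of last decay, in seconds */
};

/* Decay the count to the current time: count *= 0.5^(dt / half_life). */
static void ewma_decay(struct circ_ewma *c, double now, double half_life)
{
    c->cell_count *= pow(0.5, (now - c->last_update) / half_life);
    c->last_update = now;
}

/* Pick the circuit with the lowest decayed count, charge it one cell. */
int ewma_pick(struct circ_ewma *circs, int n, double now, double half_life)
{
    int best = 0;
    for (int i = 0; i < n; i++) {
        ewma_decay(&circs[i], now, half_life);
        if (circs[i].cell_count < circs[best].cell_count)
            best = i;
    }
    circs[best].cell_count += 1.0;  /* one cell scheduled on this circuit */
    return best;
}
```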

6 Connection output. A cell that is selected and written to a connection's output buffer (Figure 1i) activates the write event registered with libevent for that connection. Once libevent determines that the TCP socket is writable, the write callback is executed asynchronously (Figure 1j). As with connection input, the relay checks the global write bucket and the per-connection write token bucket for write eligibility. If the buckets are not empty, the connection is eligible for writing (Figure 1k), and again the smaller of 16 KiB and 1/8 of the global token bucket size is written per connection (Figure 1l). The data is written to the kernel-level TCP buffer (Figure 1m) and sent on to the next hop.

References:

[1] JANSEN, R., SYVERSON, P., AND HOPPER, N. Throttling Tor Bandwidth Parasites. In Proceedings of the 21st USENIX Security Symposium (2012), pp. 349–363.

[7] ALSABAH, M., BAUER, K., GOLDBERG, I., GRUNWALD, D., MCCOY, D., SAVAGE, S., AND VOELKER, G. DefenestraTor: Throwing out Windows in Tor. In Proceedings of the 11th International Symposium on Privacy Enhancing Technologies (2011).

[17] DINGLEDINE, R. Research problem: adaptive throttling of Tor clients by entry guards. https://blog.torproject.org/blog/research-problem-adaptive-throttling-tor-clients-entry-guards.

[47] REARDON, J., AND GOLDBERG, I. Improving Tor using a TCP-over-DTLS tunnel. In Proceedings of the 18th USENIX Security Symposium (2009).

[52] TANG, C., AND GOLDBERG, I. An Improved Algorithm for Tor Circuit Scheduling. In Proceedings of the 17th ACM Conference on Computer and Communications Security (2010), pp. 329–339.

[54] TSCHORSCH, F., AND SCHEUERMANN, B. Tor is Unfair – and What to Do About It, 2011.

[61] REARDON, J., AND GOLDBERG, I. Improving Tor using a TCP-over-DTLS tunnel. In Proceedings of the 18th USENIX Security Symposium (2009).
