How Tor Circuits Actually Work Under the Hood
Every Tor circuit travels through exactly three relays, but the algorithm selecting those relays determines whether your traffic stays anonymous or gets correlated.
Your traffic enters the Tor network through a guard node, bounces through a middle relay, and exits through an exit node; every circuit follows this exact three-hop architecture. This basic design has protected whistleblowers, journalists, activists, and privacy-conscious users since Tor launched in 2002, but the algorithms governing which relays your traffic touches determine whether that protection holds up against well-resourced adversaries.
The Tor network currently operates approximately 8,000 volunteer-run relays serving millions of daily users. Every relay publishes its capabilities, bandwidth, and policies to a group of directory authorities that compile this information into a consensus document. Your Tor client downloads this consensus and uses it to build circuits, selecting relays based on bandwidth weighting and strict constraints designed to prevent attackers from controlling multiple positions in your path.
The official Tor specification enforces several hard rules when building circuits. Your client only selects relays with the Fast flag for any production circuit. Two relays from the same declared family can never appear together. Multiple relays from the same /16 IPv4 subnet or /32 IPv6 subnet are prohibited from sharing a circuit because an adversary controlling that network range could observe traffic at multiple hop positions.
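These constraints can be sketched in a few lines of Python. The `Relay` model and field names below are illustrative, not Tor's actual data structures; only the rules themselves (Fast flag required, no shared family, no shared /16) come from the specification described above:

```python
from dataclasses import dataclass, field
from ipaddress import ip_network, ip_address

@dataclass
class Relay:
    nickname: str
    ip: str                                   # IPv4 address as a string
    flags: set = field(default_factory=set)   # consensus flags, e.g. {"Fast"}
    family: frozenset = frozenset()           # declared family members

def same_subnet(a: Relay, b: Relay, prefix: int = 16) -> bool:
    """True if both relays fall inside the same /16 IPv4 range."""
    net = ip_network(f"{a.ip}/{prefix}", strict=False)
    return ip_address(b.ip) in net

def may_share_circuit(a: Relay, b: Relay) -> bool:
    if "Fast" not in a.flags or "Fast" not in b.flags:
        return False        # production circuits only use Fast relays
    if a.family & b.family:
        return False        # relays in the same declared family
    if same_subnet(a, b):
        return False        # same /16: one operator could watch both hops
    return True
```

A client would apply `may_share_circuit` pairwise to every candidate before extending a circuit.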
Bandwidth weighting sounds simple (pick faster relays more often), but the implementation uses 16 position-specific weights published in the consensus (Wgg, Wgm, Wgd, and so on). These weights account for the proportion of total network capacity held by Guard-flagged versus Exit-flagged relays, preventing either category from becoming a bottleneck. If any weight appears malformed in the consensus, clients fall back to a default value of 10000.
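A hedged sketch of the weighting logic for one position: the middle-position weight names (Wmg, Wmm, Wme, Wmd) follow the same consensus naming scheme as the guard-position names cited above, but the relay data model and the weight values in the example are invented:

```python
import random

BW_WEIGHT_SCALE = 10000   # consensus weights are fractions of this scale
DEFAULT_WEIGHT = 10000    # fallback when a consensus weight is malformed

def read_weight(consensus: dict, key: str) -> int:
    """Fetch a position weight, falling back to the default on bad values."""
    w = consensus.get(key)
    return w if isinstance(w, int) and 0 <= w <= BW_WEIGHT_SCALE else DEFAULT_WEIGHT

def middle_weight_key(flags: set) -> str:
    """Pick the middle-position weight name based on a relay's flags."""
    g, e = "Guard" in flags, "Exit" in flags
    if g and e:
        return "Wmd"      # Guard+Exit relay used in the middle position
    if g:
        return "Wmg"      # Guard-flagged relay used in the middle position
    if e:
        return "Wme"      # Exit-flagged relay used in the middle position
    return "Wmm"          # unflagged relay in the middle position

def pick_weighted(relays: list, consensus: dict) -> str:
    """Choose a middle relay with probability ∝ bandwidth × position weight."""
    weights = [r["bandwidth"] * read_weight(consensus, middle_weight_key(r["flags"]))
               for r in relays]
    return random.choices([r["name"] for r in relays], weights=weights)[0]
```

The effect is that a Guard-flagged relay's chance of serving as a middle hop shrinks when guard capacity is scarce, keeping entry positions supplied.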
Long-lived connections to ports like SSH (22), IRC (6667), or XMPP (5222) require relays with the Stable flag. The specification lists 12 specific ports that trigger this requirement: 21, 22, 706, 1863, 5050, 5190, 5222, 5223, 6523, 6667, 6697, and 8300. Service-side introduction circuits for onion services also mandate stability because dropping those circuits exposes timing patterns.
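The port rule is easy to express directly. The port set below is the list quoted above; the `usable_relays` helper and its dictionary-based relay records are illustrative assumptions, not Tor's internals:

```python
# The 12 long-lived ports that require Stable relays (from the spec list above).
LONG_LIVED_PORTS = {21, 22, 706, 1863, 5050, 5190,
                    5222, 5223, 6523, 6667, 6697, 8300}

def needs_stable(port: int) -> bool:
    """True if a connection to this destination port requires a Stable relay."""
    return port in LONG_LIVED_PORTS

def usable_relays(relays: list, dest_port: int) -> list:
    """Filter candidates: Fast always required, Stable added for long-lived ports."""
    required = {"Fast"} | ({"Stable"} if needs_stable(dest_port) else set())
    return [r for r in relays if required <= r["flags"]]
```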
The first hop in every circuit uses a guard node, and Tor deliberately limits how often you switch guards. Before 2014, clients rotated guards every 60 days. Research demonstrated that this rotation actually increased exposure to end-to-end correlation attacks because an adversary running a fraction of guards would eventually become your entry point through natural churn. Extending guard persistence to 9 months caps cumulative risk for most users, though different threat models may warrant different approaches.
The guard selection algorithm maintains three data structures. SAMPLED_GUARDS persists across Tor sessions and contains relays previously seen with the Guard flag. CONFIRMED_GUARDS tracks guards you have successfully used, ordered by first successful connection. PRIMARY_GUARDS contains your currently preferred guards, composed of confirmed guards plus additional samples.
When your client needs a guard, it first checks if any primary guard shows reachability status of "maybe" or "yes." If multiple primary guards qualify, selection happens uniformly at random from that set. If no primary guards work, the algorithm falls back to confirmed guards, then samples new guards, and only marks guards as "maybe reachable" for retry after exhausting all options.
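The fallback order above can be sketched as follows. This is a simplified model of the behavior just described, not the actual guard-spec state machine; the guard records are plain dictionaries invented for illustration:

```python
import random

def choose_guard(primary: list, confirmed: list, sampled: list, rng=random):
    """Pick a guard: primaries first (uniformly at random among usable ones),
    then confirmed guards, then fresh samples; on total failure, mark every
    guard 'maybe' reachable so it becomes eligible for retry."""
    usable = [g for g in primary if g["reachable"] in ("yes", "maybe")]
    if usable:
        return rng.choice(usable)            # uniform among usable primaries
    for pool in (confirmed, sampled):
        usable = [g for g in pool if g["reachable"] != "no"]
        if usable:
            return usable[0]                 # ordered fallback, not random
    for g in primary + confirmed + sampled:  # exhausted: allow retries
        g["reachable"] = "maybe"
    return None
```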
Each guard expires through two mechanisms. Unlisted guards get removed after REMOVE_UNLISTED_GUARDS_AFTER days of not appearing in the consensus. Guards also expire after GUARD_LIFETIME days if they were either never confirmed or their confirmation happened longer ago than GUARD_CONFIRMED_MIN_LIFETIME. This prevents stale guards from persisting indefinitely while protecting confirmed guards that continue working.
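Both expiry rules fit in one predicate. The parameter values below are placeholders (the real defaults live in Tor's guard specification), and the guard records are hypothetical dictionaries with timestamps expressed in days:

```python
# Placeholder values; the actual defaults come from Tor's guard-spec.
REMOVE_UNLISTED_GUARDS_AFTER = 20     # days unlisted before removal (assumed)
GUARD_LIFETIME = 120                  # days before age-based expiry (assumed)
GUARD_CONFIRMED_MIN_LIFETIME = 60     # max age of a protective confirmation (assumed)

def guard_expired(g: dict, now: int) -> bool:
    """Apply both expiry rules to a guard record (timestamps in days)."""
    unlisted_since = g.get("unlisted_since")
    if unlisted_since is not None and now - unlisted_since > REMOVE_UNLISTED_GUARDS_AFTER:
        return True                   # dropped out of the consensus too long ago
    if now - g["added"] > GUARD_LIFETIME:
        confirmed_at = g.get("confirmed_at")
        if confirmed_at is None or now - confirmed_at > GUARD_CONFIRMED_MIN_LIFETIME:
            return True               # never confirmed, or confirmation too stale
    return False
```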
Tor Proposal 171 introduced stream isolation to address a subtle but serious threat: applications sharing circuits create correlation opportunities. Before stream isolation, your browser requests and your email client's connections might traverse the same circuit simply because they used the same exit policy. A malicious exit node observing both traffic types could link your web browsing to your email identity.
Stream isolation forces different applications onto separate circuits using isolation flags. IsolateDestAddr creates a new circuit for each destination address. IsolateDestPort separates traffic by destination port. IsolateSOCKSAuth uses SOCKS authentication credentials to partition circuits, which lets applications like Tor Browser use unique credentials per tab.
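In a torrc file, these flags attach to individual SocksPort lines. The port numbers below are arbitrary examples; the flag names are the ones described above:

```
# One SocksPort per application, so their streams never share circuits.
SocksPort 9050 IsolateDestAddr IsolateDestPort   # general traffic
SocksPort 9152 IsolateSOCKSAuth                  # one circuit per credential
SocksPort 9153                                   # a dedicated port for one app
```

Each application then points its SOCKS proxy at its own port, and Tor keeps their circuits apart.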
Whonix configures most preinstalled applications with dedicated SocksPorts, ensuring no identity correlation between applications. Tails implements similar isolation, routing Tails-specific applications through dedicated ports. Both projects demonstrate that stream isolation requires deliberate configuration because default Tor behavior allows circuit sharing for efficiency.
The tradeoff: more circuits mean more load on the network. A 2025 preprint study examining path selection strategies found that congestion-aware selection showed maximum improvements of 42% over default bandwidth-weighted selection in simulated throughput tests. Geographic latency-optimized selection achieved the lowest latency at 40ms. But these optimizations potentially reduce anonymity by making path selection predictable.
Tor only supports TCP, which creates a problem for DNS queries that traditionally use UDP. The solution involves Tor's DNSPort and TransPort configuration options. Tails routes all DNS requests through Tor's DNSPort, while properly configured systems block UDP packets entirely using firewall rules; because Tor cannot tunnel UDP, a correctly configured system lets none escape.
Transparent torification through iptables redirects all outbound TCP to Tor's TransPort and all DNS to DNSPort. A typical configuration sets VirtualAddrNetworkIPv4 to 10.192.0.0/10, enables AutomapHostsOnResolve, and configures TransPort with isolation flags. This forces every application's traffic through Tor regardless of whether the application was designed for it.
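A minimal sketch of that setup, combining the torrc values named above with illustrative firewall rules (port numbers and the Tor user name are assumptions that vary by distribution):

```
# torrc
VirtualAddrNetworkIPv4 10.192.0.0/10
AutomapHostsOnResolve 1
TransPort 9040
DNSPort 5353

# Firewall rules (run as root): exempt Tor's own traffic, then redirect
# everything else into the TransPort and DNSPort.
iptables -t nat -A OUTPUT -m owner --uid-owner debian-tor -j RETURN
iptables -t nat -A OUTPUT -p udp --dport 53 -j REDIRECT --to-ports 5353
iptables -t nat -A OUTPUT -p tcp --syn -j REDIRECT --to-ports 9040
```

The owner-match exemption matters: without it, Tor's own outbound connections would loop back into the redirect rules.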
DNSPort routes all DNS queries through shared circuits rather than isolating them per destination the way SocksPort can; this gap increases correlation risk rather than causing traditional DNS leaks. Virtualized solutions like Whonix provide stronger guarantees by routing the entire guest operating system through a Tor gateway, preventing any application from bypassing the proxy.
The FlashFlow measurement framework discovered that Tor's current bandwidth measurement system underestimates actual relay capacity by approximately 50%. Relays can handle significantly more traffic than the consensus advertises, meaning path selection algorithms make suboptimal choices based on outdated capacity information.
Congestion-aware algorithms like ABRA (Avoiding Bottleneck Relay Algorithm) improved median throughput by nearly 20% by dynamically routing around overloaded relays. Having three pre-built circuits available lets clients identify fast circuits before committing traffic, improving median time-to-first-byte by 15%. Path selection incorporating geographic information alongside bandwidth achieved 20% faster median TTFB and 11% faster total download times.
The Tor Project continues revisiting these tradeoffs in ongoing specification discussions. The fundamental tension persists: optimizing for performance narrows the anonymity set, while maximizing anonymity often means accepting slower connections.
Configure stream isolation for any application handling sensitive traffic. Use IsolateSOCKSAuth when running multiple identities through the same Tor instance. Route DNS exclusively through Tor's DNSPort and block all UDP at the firewall level. For most threat models, accept the default 9-month guard persistence: rotating guards more aggressively exposes you to more potential adversaries over time.
Monitor your guard selection by checking Tor's control port output for guard changes. Unexpected guard rotation after a short period suggests either network instability or potential interference. The Nyx tool (formerly ARM) provides real-time visibility into circuit construction and guard status.
For maximum isolation, consider virtualized solutions like Whonix that route all guest OS traffic through a dedicated Tor gateway VM. This architecture prevents even kernel-level exploits from leaking your real IP address because the guest system has no network interface capable of reaching the internet directly.
The Tor network protects your traffic through cryptographic layering and probabilistic path selection, but those protections depend on proper circuit management. Understanding how path selection actually works lets you configure your setup appropriately and recognize when something behaves abnormally.