Experimental Defense for Website Traffic Fingerprinting
Updated 09/05/2011: Added a link to the actual implementation patch on gitweb.
Website fingerprinting is the act of recognizing web traffic through surveillance despite the use of encryption or anonymizing software. The general idea is to leverage the fact that many web sites have specific fixed request patterns and response byte counts that are known beforehand. This information can be used to recognize your web traffic despite attempts at encryption or tunneling. Websites that have an abundance of static content and a fixed request structure tend to be vulnerable to this type of surveillance. Unfortunately, there is enough static content on most websites for this to be the case.
Early work was quick to determine that simple packet-based encryption schemes (such as wireless and/or VPN encryption) were insufficient to prevent recognition of traffic patterns created by popular websites in the encrypted stream. Later, a small-scale study determined that a lot of information could be extracted from HTTPS streams using these same approaches against specific websites.
Despite these early results, whenever researchers tried naively applying these techniques to Tor-like systems, they failed to come up with publishable results (meaning the attack did not work against Tor), due largely to the fixed 512-byte cell size, as well as the multiplexing of Tor client traffic over a single TLS connection.
However, last month, a group of researchers succeeded in performing this attack where the others had failed. Their success hinged largely on their use of a simplified yet well-chosen feature set for training their classifiers. Where other researchers simply dumped packet sizes and timings into their classifiers and unsurprisingly got poor results against Tor traffic, this group extracted the time, quantity and direction of traffic, and discarded irrelevant control information such as TCP ACKs.
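For illustration, here is a minimal sketch (our own, not the researchers' actual tooling) of reducing a captured packet trace to this kind of feature set:

```python
# Minimal sketch of the feature extraction described above (illustrative
# only). A trace is a list of (timestamp, direction, payload_bytes)
# records, with direction +1 for client-to-server and -1 for
# server-to-client packets.

def extract_features(trace):
    features = []
    for timestamp, direction, payload_bytes in trace:
        # Discard control packets carrying no application data, such as
        # bare TCP ACKs: they contribute noise rather than signal.
        if payload_bytes == 0:
            continue
        features.append((timestamp, direction, payload_bytes))
    return features

# Toy example: the empty ACK at t=0.05 is dropped.
trace = [(0.00, +1, 512), (0.05, -1, 0), (0.07, -1, 512), (0.09, -1, 512)]
print(extract_features(trace))
```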
While the research methodology of the attack side of their work is particularly exemplary (certainly the most thorough study in website fingerprinting to date), their results are still likely insufficient to deploy against the network and expect to catch people engaging in single visits to forbidden websites. For such a use case, even their relatively low false positive rate is going to cause a lot of issues when deployed against large quantities of traffic: since the overwhelming majority of observed traffic is not a visit to any particular forbidden site, even a small false positive rate buries the true positives in false alarms. Concurrent use of multiple AJAX-enabled websites and/or other applications will also frustrate an attacker. Even without concurrent activity, their "open-world" experiment (the most realistic scenario for dragnet surveillance of Tor traffic) shows true positive accuracy of around 55%.
However, repeated observations over long periods of time are likely sufficient to develop certainty. It is also possible that extracting actual application data lengths from Tor TLS headers would add additional accuracy.
Hence, this is a rather nasty attack. It is basically a one-ended version of the age-old end-to-end correlation attack (where an adversary attempts to observe both the entrance to the network and the exits to correlate traffic flows). With the website fingerprinting attack, the adversary only needs to observe a single entry (a bridge, a guard node, or a regional firewall) of the network to begin gathering information about users who use that entry point.
However, because the attack is only one-ended, many defenses that would be useless against the full end-to-end attack can be considered. In the countermeasures section of their paper, the researchers point out that a Firefox addon that simply performs background HTTP requests concurrent to normal user activity was enough to foil their classifier.
We disagree with the background fetch approach because it seems that a slightly more sophisticated attack would train a separate classifier to recognize the background cover traffic and then subtract it before attempting to classify the remainder. In the face of this concern, it seems that the background request defense is not worth the additional network load until it can be further studied in detail.
Instead, we are deploying an experimental defense in today's Tor Browser Bundle release that is specifically designed to reduce the information available for feature extraction without adding overhead. The defense is to enable HTTP pipelining, and to randomize the pipeline size as well as the order of requests. The source code to the implementation can be viewed on gitweb.
Since normal, non-randomized pipelining is still off by default to this day in Firefox, we are assuming that the published attack results are against serialized request/response behavior, which provides significantly more feature information to the attacker. In particular, we believe a randomized pipeline will eliminate or reduce the utility of the 'Size Marker', 'Number Marker', 'Number of Packets', and 'Occurring Packet Sizes' features on sites that support pipelining, due to the batching of requests and responses of arbitrary sizes. More generally, the randomized pipeline should obscure the request vs response size and request ordering information available to the classifier.
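Conceptually, the defense behaves like the following Python sketch. This is a model of the idea only: the real defense is a patch to Firefox's HTTP networking code (linked above), and the depth bounds used here are illustrative assumptions, not the shipped patch's values.

```python
import random  # sketch only; the real patch draws randomness from NSS's CSPRNG

def send_pipelined(batch):
    # Stand-in for writing one burst of pipelined HTTP requests to the
    # connection and reading back the batched responses.
    print("pipelined batch:", batch)

def randomized_pipeline(requests, min_depth=2, max_depth=12):
    """Send queued requests in shuffled, randomly sized pipeline batches."""
    pending = list(requests)
    random.shuffle(pending)                           # randomize request order
    while pending:
        depth = random.randint(min_depth, max_depth)  # fresh size per batch
        batch, pending = pending[:depth], pending[depth:]
        send_pipelined(batch)

randomized_pipeline(["img%d.png" % i for i in range(15)])
```

Because both the batch boundaries and the request order vary from load to load, the sequence of request and response sizes on the wire no longer maps cleanly onto the page's fixed object structure.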
Our hope is that the randomized pipeline defense will therefore increase the duration of observation required to establish certainty that a site is being visited, by lowering the true positive rate and/or raising the false positive rate beyond what the researchers observed.
We do not expect this defense to be foolproof. We create it as a prototype, and request that future research papers do not treat the defense as if it were the final solution against website fingerprinting of Tor traffic. In particular, not all websites support pipelining (in fact, an unknown number may deliberately disable it to reduce load), and even those that do will still leak the initial response size as well as the total response size to the attacker. Pipelining may also be disabled by malicious or simply misconfigured exits.
However, the defense could also be improved. We did not attempt to determine the optimal pipeline size or distribution, and are relying on the research community to tweak these parameters as they evaluate the defense.
We could also take more extreme measures, such as building pipelining support into Tor exit nodes. Perhaps better still, we could deploy an HTTP to SPDY translation service at exits. The more efficient request bundling of SPDY would likely obscure request vs response size information yet further. However, as these translations are potentially fragile as well as labor-intensive to implement and deploy, we are unlikely to take these measures without further feedback from and study by the research community.
Alternatively, or perhaps additionally, defenses could be deployed at the obfsproxy plugin layer. Defenses there would not help against an adversary at the bridge or guard node, but would help against regional firewalls.
We would love to hear feedback from the research community about these approaches, and look forward to hearing more results of future attack and defense work along these and other avenues.
Comments
Please note that the comment area below has been archived.
Sounds interesting. Are there any plans for an add-on separate from TorBrowser/TorButton that might be used with custom Tor implementations?
No.
Wait.
Maybe:
https://trac.torproject.org/projects/tor/ticket/1816
But the ability to produce a finished product is outside of our control:
https://trac.torproject.org/projects/tor/wiki/doc/ImportantGoogleChrome…
It is very unlikely that we will be able to achieve the same level of precision in terms of altering browser behavior without significant cooperation from Google (or a fork of Chromium). So for now, might as well keep dancing with the devil you know.
Oh, I was thinking about a separate -Firefox- add-on (for users with custom implementations, not using Torbutton but a proxy chain plus Noscript, BetterPrivacy, CookieSafe, Ghostery, RefControl etc.), but I can see why you would neither pursue nor recommend this. Thanks anyway! Keep up the good work!
If we open multiple pipelines to get one web page and its linked objects (each of the pipelines carrying a random number of HTTP requests), could these pipelines go over different entry guards?
This would somewhat thwart traffic analysis by a rogue entry guard, because the number of observed objects would not match the number of objects on the original web page.
Right now, Tor basically tries to keep using the same circuit for the same ports for 10 minutes. This should mean that a website has all of its 3rd party content loaded over the same circuit, but it is not guaranteed.
Our plan is to ensure that each top-level urlbar domain has all of the third party content go through a single circuit, but that fresh circuits will be used for new urlbar domains: https://trac.torproject.org/projects/tor/ticket/3455
This circuit isolation property is intended to reduce linkability between different site activity by exit nodes.
This means that if you are using concurrent activity as a defense, you have a 1/3 chance (roughly) of that concurrent activity actually using a different guard node than the one that might be targeting you.
Sending third party content down a different circuit is possible, I suppose, but I worry that if we do it deliberately, things may break. It will also consume a lot of relay CPU resources for excess circuit creation...
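To make the planned isolation concrete, here is a toy model (hypothetical names; the real logic would live inside Tor itself, per the ticket above):

```python
# Toy model of per-urlbar-domain circuit isolation (hypothetical API).

circuits = {}      # top-level urlbar domain -> circuit id
next_id = [0]

def circuit_for(urlbar_domain):
    """Streams for one top-level site share a circuit; new sites get fresh ones."""
    if urlbar_domain not in circuits:
        circuits[urlbar_domain] = next_id[0]  # allocate a fresh circuit
        next_id[0] += 1
    return circuits[urlbar_domain]

# Third-party content embedded in a page reuses that page's circuit:
assert circuit_for("example.com") == circuit_for("example.com")
assert circuit_for("example.com") != circuit_for("torproject.org")
```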
Doesn't this randomization introduce an individual pattern for otherwise uniform requests?
Jack and Jill visit https://www.torproject.org/ - 1 page, 1 style-sheet, 15 images = 17 requests.
Jack's requests: pipeline1=4, pipeline2=5, pipeline3=2, pipeline4=6.
Jill's requests: pipeline1=7, pipeline2=3, pipeline3=7.
Both visit the same page, but they can be distinguished by their individual pipeline patterns.
The pipeline size is randomized for each batch of requests.
For example, let's say www.torproject.org has 15 images and supports pipelining. The request processing then looks roughly like the following sketch (the batch-size bounds are illustrative assumptions; the exact parameters are in the patch linked below):
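```python
import random  # sketch; the actual patch uses NSS's PK11_GenerateRandom()

remaining = 15                                 # images left to request
while remaining:
    P = min(random.randint(2, 8), remaining)   # fresh random pipeline size per batch
    print("send %d pipelined requests, read %d responses" % (P, P))
    remaining -= P
```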
As you can see, a new random number P is chosen for each batch of requests. It is not a property of either Jack's or Jill's computer. It does not persist. The sequence of P's comes from the NSS cryptographically secure random number generator, via PK11_GenerateRandom().
The full patch is here:
https://gitweb.torproject.org/torbrowser.git/blob/maint-2.2:/src/curren…
>The pipeline size is randomized for each batch of requests.
Yes, that was my assumption.
Theoretical case:
Jill's ISP records her traffic. Jill visits www.truenews.blog, a service inconvenient for some, and therefore all of its traffic is watched. The site uses common blog software with rotating images, making correlation by image byte size uncertain.
Jill gets questioned later and claims she had visited some other site instead. Jill's browser used 3 pipelines (3-5-2) for the 10 images on the front page. The other 4 Tor visitors at the time had different pipeline patterns.
After Jill had read 5 pages at www.truenews.blog both ISPs have recorded a distinct pipeline request pattern (3-5-2) (6-2-2) (8-2) (2-3-2-3) (4-2-4).
I think it could be rather convincing to show Jill had indeed visited that blog and rather difficult for Jill to deny it because of the uniqueness of the pattern chain.
You're basically describing an end-to-end correlation attack against a specific, targeted user. This is not a survivable scenario with or without this patch. The traffic timing information from the combination of the surveilling local ISP and the surveilled blog will be more than enough to doom Jill, regardless of her custom request pattern.
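For intuition, here is a toy illustration (ours, not an actual attack tool) of how little it takes to correlate the two vantage points by traffic volume alone:

```python
# Toy end-to-end correlation (illustration only): compare per-second traffic
# volumes recorded at the client's ISP with those recorded at the blog.

def correlate(a, b):
    """Cosine similarity of two equal-length volume timelines."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return dot / norm if norm else 0.0

isp_volume  = [0, 12, 48, 3, 0, 30, 0, 9]   # KB/sec leaving Jill's ISP
blog_volume = [0, 11, 50, 2, 0, 29, 0, 10]  # KB/sec arriving at the blog
other_user  = [20, 0, 5, 40, 1, 0, 25, 3]   # some unrelated client

print(correlate(isp_volume, blog_volume))   # near 1.0: likely the same flow
print(correlate(other_user, blog_volume))   # much lower
```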
I guess you are arguing that while improving against the one-ended attack, this defense makes the two-ended attack worse as a tradeoff, and that this tradeoff is bad. However, the two-ended attack is really bad already anyway because it gets to use inter-packet timings, connection hiccups, and even active traffic shaping, especially when targeted against specific sites and users.
If we're not talking about actual packet captures at the blog and instead just subpoenaed logs, normal Apache logs don't record pipelining status, so that may also be a slightly different story...
The rabbit hole goes pretty deep in terms of tradeoffs and actual threat under different levels of observation detail, but once you start talking about surveillance at both ends, the current understanding in the research community is that it is game over. This assumption may in fact be wrong, though. For an amusing breakdown of the difference between a dragnet global vs targeting adversary, see: http://archives.seul.org/or/dev/Sep-2008/msg00016.html
Does my privacy depend on "a group of researchers (who) succeeded in performing this attack" or an "experimental defense" nightly build by Tor? Are you preventing or reacting against OnionCoffee and Prime-Project? This isn't a kid fight. Please, grow up.
What does OnionCoffee and/or Prime-Project have to do with this?
The research community explored this topic because understanding how the attack might work in practice is essential to developing a proper defense for it. This is how scientific security research works.
Unfortunately, the paper authored by the researchers was significantly more rigorous in its analysis of the attack side than in its development of proper defenses. We believe the defense provided by the researchers will not stand up to a more sophisticated attack, and will just waste network resources. We believe this because there is a history of failures of background cover traffic in the academic literature. The basic stuff definitely doesn't work, and even the complex schemes are regarded as questionable.
Hence, we are deploying an alternate defense that we feel might perform better at no cost to the network.
It is possible that our defense won't stand up either. This is unfortunately the nature of science: We must explore attacks and defenses and deploy only those that we expect to stand up to further scrutiny. Sometimes we are wrong, and we must rethink the defense or improve it. This blog post was primarily about properly communicating to the research community why we chose the defense we did, and how they might best study it and defenses like it.
If you disagree with our reasoning, you are of course free to deploy the defense suggested by the researchers in combination. Just keep browser tabs open to a few pages that have refresh timers (for example, many news sites that update with new headlines, and also Twitter). This will basically be equivalent to the defense the researchers suggested.
However, until more research is done, neither of us can be 100% sure we're doing it right. We suspect you'd be doing it wrong, but it's not like we can stop you.
I mean, what do you want from us? Magic?
Sorry for answering late, but I'm on the move. 1st) To your first question: Andriy Panchenko, who did/supervised the job, had worked on OnionCoffee and Prime. The same goes for the others signing the paper. You can see it here: http://lorre.uni.lu/~andriy/ 2nd) The defense Andriy proposes is too simple and inefficient (preparing a new addon?), but it leads you to pipelining. So, he's done. It makes you change. 3rd) I'm talking to you because YOU must try to "prevent" beforehand and not "react" against attacks. Users must believe you when you talk/write, not when you react. Web fingerprinting is as old as grandma. What were you doing all these years? 4th) I can work with pipelining just from about:config and a few extra touches. This is years old. No offense. Just thoughts to improve prevention.
1. Ok.
2. Ok.
3. We can't make defenses against imaginary attacks. The exact mechanisms of how the attack might work were not successfully demonstrated until Panchenko published. I guess you didn't manage to make it to paragraph #3, where I said that every other research group that tried this attack against Tor failed to produce results?
4. The original about:config options for pipelining don't allow the randomization we implemented.
5. We implemented the randomization specifically because we finally had a proven, published, and peer-reviewed attack mechanism that described and demonstrated exactly what information the attack needed to succeed. We did not have this before. All we had was speculation. It was exactly the attack setup information published by Panchenko that led us to decide on randomized pipelining as a plausible defense (see the paragraph in the blog about the feature-set used and which ones we expect to disable).
6. No offense, but you appear to be a troll. You might want to have that looked at by a professional.
In this context, I think "prevention" is the use of all types of imaginary attacks. BTW, what would you do if Panchenko didn't publish a thing? How would you know what to do? Remember, web fingerprinting isn't new. Just try to prevent. I know it isn't easy. But let all of Tor's users be sure that you are thinking about the future, not changing versions whenever some new attack comes into sight. Users must trust Tor. Trust is everything in privacy. I believe in it.
If Panchenko didn't publish, we would probably set our packet size to a fixed amount (say, 512 bytes), and multiplex all of the client data over a single TLS stream. This should provide some obfuscation for most protocols (remember, Tor supports more than just HTTP) without too much overhead, complexity, or protocol-specific craziness. Oh wait, we did that. And it did. For a decade.
We want the trust in Tor to come from the fact that we are open source and transparent, and that we do Real Science. Everything in Tor is built upon decades of public research. We don't want trust in Tor to come from us doing secret defenses and looking into a crystal ball and guessing what the most devastating imaginary attacks might be, and developing broken pretend defenses against imaginary attacks that probably don't work in reality.
There are plenty of other tools that will sell you magical secret sauce to defend against everything from the future and beyond. We don't do that, because there's another name for that magical secret sauce: Snake Oil.
Now you're talking, not babbling. I disagree with you because I believe in anticipating attacks rather than reacting, but it's your Tor, not mine. I'll trust Tor if you protect my privacy. If not, I'll contact Andriy Panchenko. :-)
The answer to your first question is "yes." A little bird tells me the US military has the same problem protecting communications between its satellites.
From the little research I've done, this sounds to me like the right approach.
The defense suggested in the paper is useless against future (better accuracy) attacks. Sounds cosmetic, really.
Shorter, self-similar (i.e. either constant, or resembling white noise -- no in-between) bursts of traffic will degrade the accuracy of classifiers very quickly. Pipelining will do this. However, the more information passed in any session, the worse off Tor is in terms of defenses against future attacks. For example, I would *not* choose this approach for instant messaging. There, constant cover traffic (say, bursts every second) is more appropriate.
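A sketch of what constant cover traffic might look like (our illustration, with made-up parameters, not a deployed scheme):

```python
import time

def constant_cover_loop(send_burst, queue, interval=1.0, burst_bytes=512):
    """Emit one fixed-size burst every `interval` seconds, whether or not
    real data is queued, so an observer sees only a constant-rate stream."""
    while True:
        data = queue.pop(0) if queue else b""       # real message, if any
        payload = data.ljust(burst_bytes, b"\x00")  # pad to a constant size
        send_burst(payload)
        time.sleep(interval)
```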
Good luck.
Why do the new Tor versions use Aurora instead of the actual Firefox 6?
The Firefox stable release is already at version 6, so there is no need to use the pre-beta Aurora version 6.
Same source code. It is only the graphics that differ.
I think it confuses people that it's Aurora and not Firefox.
Tor is not working now in Iran! I have used a bridge and tested it, but it is blocked!!! How can we run Tor again?!
https://blog.torproject.org/blog/iran-blocks-tor-tor-releases-same-day-…