Yesterday was the third and final day of AIMS-5. With the main topic being Detection of Censorship, Filtering, and Outages, many of these talks were much more in line with what I know and what I’m working on. I gave my presentation as well; you can see it (along with a link to my slides) down below.
Emile Aben (RIPE NCC), Hurricane Sandy, as seen by RIPE Atlas
This presentation had some really interesting visuals showing large parts of the Internet getting rerouted around New York during Hurricane Sandy. Interestingly, they found that most of the new routes changed only at the IP level rather than the AS level, which implies that most of the rerouting took place within networks rather than across completely new paths. They also showed how a lot of the traffic that would normally go through the New York link shifted to the other large trans-Atlantic link at Ashburn/Washington DC.
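To make the IP-versus-AS distinction concrete, here is a minimal sketch of the comparison (not their actual analysis): map each traceroute hop to its origin AS and check whether the before/after paths differ at the IP level, the AS level, or both. The `ip_to_asn` table and hop lists below are invented purely for illustration.

```python
# Toy sketch: did a reroute change only the IP-level path, or the AS-level path too?
# The ip_to_asn mapping and hop lists are made-up data, not real RIPE Atlas output.

ip_to_asn = {
    "192.0.2.1": 64500, "192.0.2.9": 64500,        # same network, different routers
    "198.51.100.3": 64501, "203.0.113.7": 64502,
}

def as_path(ip_hops):
    """Collapse an IP-level path into its AS-level path (dropping repeated ASes)."""
    path = []
    for ip in ip_hops:
        asn = ip_to_asn.get(ip)
        if asn is not None and (not path or path[-1] != asn):
            path.append(asn)
    return path

before = ["192.0.2.1", "198.51.100.3", "203.0.113.7"]
after  = ["192.0.2.9", "198.51.100.3", "203.0.113.7"]   # first hop rerouted

print("IP path changed:", before != after)                       # True
print("AS path changed:", as_path(before) != as_path(after))     # False -> intra-network reroute
```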
John Heidemann (USC/Information Sciences Institute), Long-term analysis of outages at the Internet edge
This was one of the larger data sets I’ve seen for long-term outage analysis. He made an interesting point: while the Arab Spring events resulted in a complete shutdown in Egypt, there was another outage in Australia that actually involved more users in absolute numbers (although a lower percentage of overall users), yet you never hear about that one. They’re working on expanding their network, and I can only imagine what could be done with even more such data. Here’s a link to their blog and here are their papers if you’d like to read more.
Phillipa Gill (The Citizen Lab/Stony Brook University), Characterizing Global Web Censorship: Why is it so hard?
This presentation stated more clearly than any other I’d seen just what is hard about measuring censorship. She uses data from the OpenNet Initiative, which is a much larger and longer-term source than what I have access to, but even then there are problems with building a picture of censorship as it varies geographically (even within a country) and temporally. She also did a good job of showing how hard it is to tell what is actually being censored as opposed to what is just a flaky network connection (and some countries shift between the two depending on the topic involved). I would love to work with OpenNet’s data to confirm some of the things I’ve seen but haven’t been able to get vantage points to measure.
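As a rough illustration of why this is hard, here is a toy sketch (not ONI’s methodology) of one way to separate likely blocking from ordinary flakiness: require failures to a test URL to be both near-constant and much more frequent than failures to a control set measured from the same vantage point. The thresholds and outcome lists are made up.

```python
# Toy sketch: flag a URL as likely blocked only when it fails almost always AND
# fails far more often than control (presumed-unblocked) URLs from the same
# vantage point. Thresholds and data are invented for illustration.

def failure_rate(outcomes):
    """outcomes: list of booleans, True = request succeeded."""
    return 1 - sum(outcomes) / len(outcomes)

def likely_blocked(test_outcomes, control_outcomes, min_rate=0.9, margin=0.5):
    test_fail = failure_rate(test_outcomes)
    control_fail = failure_rate(control_outcomes)
    return test_fail >= min_rate and (test_fail - control_fail) >= margin

test    = [False] * 19 + [True]          # 95% failures to the suspect URL
control = [True] * 18 + [False] * 2      # 10% failures to control URLs

print(likely_blocked(test, control))     # True
```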
Ramakrishnan Durairajan (University of Wisconsin, Madison), Internet Atlas: A Geographic Database of the Physical Internet
This is the sort of thing I’ve been looking for for what feels like forever. Essentially, they’ve taken as many maps of network infrastructure as they can find and are putting together a picture of how the Internet is connected in the physical world. They’ve done a lot of work to parse maps stored as image files, text content in a number of formats, and even more exotic forms of networking data, and the result looks like a very solid source of data. It’s not publicly available yet, but I look forward to when it is.
Aaron Schulman (University of Maryland), Pingin’ II: Now we’re Analyzin’
This was a followup to the original Pingin’ in the rain paper, which showed how something as relatively common as a storm can have large effects on timing and connectivity in access networks, including cable networks. Here, he talked about how he analyzes that data and showed a number of cases that are really strange. For example, given ten pinging sources, all of them are consistently successful for twelve hours; then, as if someone had hit a switch, all ten (from around the world) suddenly get really inconsistent. Twelve hours later, they’re fine again. There were a few ideas about what could have caused some of the issues, but nothing consistent.
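Here is a rough sketch of how one might spot that kind of “switch flipped” pattern in ping data: compute a per-window loss rate for each source and flag windows where every source turns lossy at once. The probe data, window sizes, and threshold below are synthetic, not from his study.

```python
# Toy sketch: flag time windows where ALL pinging sources become lossy at once.
# All data here is synthetic, purely to illustrate the idea.

def loss_rate(window):
    """window: list of booleans, True = ping reply received."""
    return 1 - sum(window) / len(window)

def flag_windows(sources, threshold=0.2):
    """sources maps a probe name to its list of per-window outcome lists."""
    n_windows = len(next(iter(sources.values())))
    return [w for w in range(n_windows)
            if all(loss_rate(probes[w]) > threshold for probes in sources.values())]

def make_source():
    clean = [[True] * 20 for _ in range(2)]                        # windows 0-1: no loss
    lossy = [[i % 2 == 0 for i in range(20)] for _ in range(2)]    # windows 2-3: 50% loss
    return clean + lossy + [[True] * 20]                           # window 4: recovered

sources = {f"probe{i}": make_source() for i in range(3)}
print(flag_windows(sources))   # [2, 3]
```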
Nick Feamster (Georgia Tech), Exposing Inconsistent Web Search Results with Bobble
This is a continuation of the ideas in Eli Pariser’s The Filter Bubble, one of the earlier works about how we each live in our own bubble when it comes to what we see on search engines and content aggregators. This talk was specifically about a Chrome extension called Bobble, which leverages PlanetLab to let users see what other machines around the world would see for the same query. It’s interesting to see what is returned in other places but not for you (30% of such results were Top 3 for someone else, and 86% were Top 10), or results that only you saw and no one else did (about 2%). It’s really interesting work and I recommend you check it out.
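Just to illustrate the kind of comparison Bobble enables (this isn’t Bobble’s code, which is a Chrome extension backed by PlanetLab nodes), here is a small sketch that takes your result list and lists from other vantage points and measures how much is shared versus unique to you. The result lists are made up.

```python
# Toy sketch: for one query, how many of my results appear in someone else's
# top-k list, and how many did only I see? Result lists are invented.

def overlap_stats(mine, others, k=10):
    others_topk = set().union(*(set(r[:k]) for r in others))
    in_others_topk = sum(1 for url in mine if url in others_topk) / len(mine)
    all_others = set().union(*(set(r) for r in others))
    only_mine = sum(1 for url in mine if url not in all_others) / len(mine)
    return in_others_topk, only_mine

mine   = ["a", "b", "c", "d", "e"]
others = [["b", "c", "x", "y"], ["a", "z", "c", "q"]]

topk_share, unique_share = overlap_stats(mine, others, k=3)
print(f"{topk_share:.0%} of my results are in someone else's top 3")   # 60%
print(f"{unique_share:.0%} of my results were seen only by me")        # 40%
```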
And that’s it. All that’s left is to head back to Indiana and try to digest everything I’ve learned here. It’s been a long but extremely useful three days.