I apologize up-front for using bad words in this post.
Said bad words include “Facebook”, “Mark Zuckerberg”, and many references to entities within the U.S. Government. Given the topic, it cannot be helped.
I’ve also left the
R tag on this despite only showing some ggplot2 plots and Markdown tables. See the end of the post for how to get access to the code & data.
R was used solely and extensively for the work behind the words.
This week Congress put on a show as they summoned the current Facebook CEO — Mark Zuckerberg — down to Washington, D.C. to demonstrate how little most of them know about how the modern internet and social networks actually work, and to chest-thump to prove to their constituents that they really and truly care about you.
These Congress-critters offered such proof in the guise of railing against Facebook for how they’ve handled your data. Note that I should really say our data, since Facebook maintains an extensive profile database on me and most everyone else, even people who are not Facebook platform users (full disclosure: I do not have a Facebook account).
Ostensibly, this data-mishandling impacted your privacy. Most of the committee members wanted any constituent viewers to come away believing they and their fellow Congress-critters truly care about your privacy.
Fortunately, we have a few ways to measure this “caring” and the remainder of this post will explore how much members of the U.S. House and Senate care about your privacy when you visit their official
.gov web sites. Future posts may explore campaign web sites and other metrics, but what better place to show they care about you than right there in their digital houses.
When you visit a web site with any browser, the main URL pulls in resources to aid in the composition and functionality of the page. These could be:
- HTML (the main page is very likely HTML unless it’s just a media URL)
- images (gif, svg, etc.)
- CSS (the “style sheet” that tells the browser how to decorate and position elements on the page)
- binary objects (such as embedded PDF files or “protocol buffer” content)
- XML or JSON
(plus some others)
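The analysis behind this post was done in R, but the core idea — walking a page’s HTML and collecting every URL the browser is told to fetch — is easy to sketch in a few lines of Python. The sample markup and URLs below are made up for illustration:

```python
from html.parser import HTMLParser

class ResourceLister(HTMLParser):
    """Collect the URLs a page asks the browser to fetch
    (scripts, images, iframes, and stylesheets)."""
    def __init__(self):
        super().__init__()
        self.resources = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag in ("script", "img", "iframe") and a.get("src"):
            self.resources.append(a["src"])
        elif tag == "link" and a.get("rel") == "stylesheet" and a.get("href"):
            self.resources.append(a["href"])

# hypothetical page markup for illustration only
page = """<html><head>
<link rel="stylesheet" href="https://cdn.example.net/site.css">
<script src="https://connect.facebook.net/en_US/sdk.js"></script>
</head><body><img src="/img/seal.png"></body></html>"""

parser = ResourceLister()
parser.feed(page)
print(parser.resources)
```

Even this toy page pulls from two domains beyond its own — and real pages pull from far more.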
When you go to, say, www.example.com, the site does not have to load all of its resources from the example.com domain. In fact, it’s rare to find a modern site that does not use resources from one or more third-party sites.
As you move along the web, these third-party touch-points add up. To demonstrate this, I did my best to de-privatize my browser and OS configuration and visited 12 web sites while keeping a fresh install of Firefox Lightbeam running. Here’s the result:
Each main circle is a distinct/main site and the triangles are resources the site tried to load. The red triangles indicate a common third-party resource that was loaded by two or more sites. Each of those red triangles knows where you’ve been (again, unless you’ve been very careful/paranoid) and can use that information to enhance their knowledge about you.
There are other issues with trusting loaded browser content, but we’ll cover that a bit further into the investigation.
The word “privacy” was used over 100 times each day by both Zuckerberg and our Congress-critters. Senators and House members made it pretty clear Facebook should care more about your privacy. Implicit in said posit is that they, themselves, must care about your privacy. I’m sure they’ll be glad to point out all along the midterm campaign trails just how much they’re doing to protect your privacy.
We don’t just have to take their word for it. After berating Facebook’s chief college dropout and chastising the largest social network on the planet we can see just how much of “you” these representatives give to Facebook (and other sites) and also how much they protect you when you decide to pay them[†] [‡] a digital visit.
For this metrics experiment, I built a crawler using R and my
splashr package which, in turn, uses ScrapingHub’s open source
Splash. Splash is an automation framework that lets you programmatically visit a site just like a human would with a real browser.
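The splashr package talks to Splash over its HTTP API; the `render.har` endpoint returns the full browser session — every resource the page loaded — as an HTTP Archive (HAR). A minimal Python sketch of building such a request (the original work used R; the host and port below assume a default local Splash install):

```python
from urllib.parse import urlencode

def splash_har_url(target, splash="http://localhost:8050", wait=2.0):
    """Build a request URL for Splash's render.har endpoint, which
    returns the full browser session (every resource the page loaded)
    as an HTTP Archive (HAR) JSON document."""
    return f"{splash}/render.har?" + urlencode({"url": target, "wait": wait})

url = splash_har_url("https://www.king.senate.gov/")
print(url)
# fetching `url` (with a Splash instance running locally) returns the HAR
```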
Normally when one scrapes content from the internet they’re just grabbing the plain, single HTML file that is at the target of a URL.
By capturing the entire browser experience for the main page of each member of Congress we can get a pretty good idea of just how much each one cares about your digital privacy, and just how much they secretly love Facebook.
Let’s take a look, first, at where you go when you digitally visit a Congress-critter.
Each House and Senate member has an official (not campaign) site that is hosted on a
.gov domain and served up from a handful of IP addresses across the following autonomous systems (n is the number of Congress-critter web sites):

| ASN | Organization | n |
|---|---|---|
| AS7016 | Comcast Cable Communications, LLC | 95 |
| AS20940 | Akamai International B.V. | 13 |
| AS1999 | U.S. House of Representatives | 6 |
| AS7843 | Time Warner Cable Internet LLC | 1 |
| AS16625 | Akamai Technologies, Inc. | 1 |
“Orange” is really Akamai, and Akamai is a giant content delivery network that helps web sites efficiently provide content to your browser and can offer Denial of Service (DoS) protection. Most sites are behind Akamai, which means you “touch” Akamai every time you visit the site. They know you were there, but I know a sufficient body of folks who work at Akamai and I’m fairly certain they’re not too evil. Virtually no representative solely uses House/Senate infrastructure, but this is almost a necessity given how easy it is to take down a site with a DoS attack and how polarized politics is in America.
To get to those IP addresses, DNS names like www.king.senate.gov (one of the Senators from my state) need to be translated to IP addresses. DNS queries are also data gold mines, and everyone from your ISP to the DNS server that knows the name-to-IP mapping likely sees your IP address. Here are the DNS servers that serve up the directory lookups for all of the House and Senate domains:
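That name-to-IP translation step is easy to watch yourself. A small Python sketch (the original analysis used R; `localhost` is used in the demo call so it works without network access, but any Congress-critter hostname would do):

```python
import socket

def resolve(host):
    """Return the unique IPv4 addresses a hostname maps to.
    Every one of these lookups is a touch-point your resolver
    (and anyone watching it) gets to log."""
    infos = socket.getaddrinfo(host, 443,
                               family=socket.AF_INET,
                               type=socket.SOCK_STREAM)
    return sorted({info[4][0] for info in infos})

# e.g. resolve("www.king.senate.gov") with network access
print(resolve("localhost"))
```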
Akamai kinda does need to serve up DNS for the sites they host, so this list also makes sense. But, you’ve now had two touch-points logged and we haven’t even loaded a single web page yet.
Safe? & Secure? Connections
When we finally make a connection to a Congress-critter’s site, it is going to be over SSL/TLS. They all support it (which is good, though SSL/TLS confidentiality is not as bullet-proof as many “HTTPS Everywhere” proponents would like to con you into believing). I took a look at the SSL certificates for House and Senate sites. Here’s a sampling from, again, my state (one House representative):
The *.house.gov “Common Name (CN)” entry means this is a wildcard certificate. Many SSL certificates have just one valid CN, but it’s also possible to list alternate, valid “alt” names that can all use the same, single certificate. Wildcard certificates ease the burden of administration, but they also mean that if, say, I managed to get my hands on the certificate chain and private key file, I could set up vladimirputin.house.gov somewhere and your browser would think it’s A-OK. Granted, there are far more Representatives than there are Senators and their tenure length is pretty erratic these days, so I can sort of forgive them for taking the easy route, but I also in no way, shape or form believe they protect those chains and private keys well.
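The danger of a wildcard CN comes down to simple name matching. Here’s a rough Python sketch of browser-style wildcard matching (simplified for illustration; real certificate validation per RFC 6125 is stricter than this):

```python
def wildcard_matches(hostname, pattern):
    """Browser-style wildcard check: '*' covers exactly one label,
    so '*.house.gov' matches 'anything.house.gov' but NOT
    'a.b.house.gov' (the wildcard does not span dots)."""
    host = hostname.lower().split(".")
    pat = pattern.lower().split(".")
    if len(host) != len(pat):
        return False
    return all(p == "*" or p == h for p, h in zip(pat, host))

print(wildcard_matches("vladimirputin.house.gov", "*.house.gov"))       # True
print(wildcard_matches("vladimirputin.evil.house.gov", "*.house.gov"))  # False
```

Anyone holding that one key can present a browser-trusted certificate for any single-label name under house.gov.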
In contrast, the Senate can and does embed the alt-names:
Are We There Yet?
We’ve got the IP address of the site and established a “secure” connection. Now it’s time to grab the index page and all the rest of the resources that come along for the ride. As noted in the Privacy Primer (above), the loading of third-party resources is problematic from a privacy (and security) perspective. Just how many third party resources do House and Senate member sites rely on?
To figure that out, I tallied up all of the non-
.gov resources loaded by each web site and plotted the distribution of House and Senate (separately) in a “beeswarm” plot with a boxplot shadowing underneath so you can make out the pertinent quantiles:
As noted, the median is around 30 for both House and Senate member sites. In other words, they value your browsing privacy so little that most Congress-critters gladly share your browser session with many other sites.
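The tally itself is straightforward once you have the list of loaded resources. A rough Python sketch (the original analysis used R; the sample URLs below are made up for illustration, and `.gov`-suffix matching is a crude stand-in for proper public-suffix handling):

```python
from collections import Counter
from urllib.parse import urlparse

def third_party_tally(resource_urls):
    """Count resources per host, keeping only non-.gov hosts —
    a rough stand-in for the per-site tally behind the beeswarm plot."""
    hosts = (urlparse(u).hostname or "" for u in resource_urls)
    return Counter(h for h in hosts if h and not h.endswith(".gov"))

# hypothetical resource list for illustration only
resources = [
    "https://www.king.senate.gov/style.css",
    "https://connect.facebook.net/en_US/sdk.js",
    "https://www.google-analytics.com/analytics.js",
    "https://connect.facebook.net/en_US/fbevents.js",
]
print(third_party_tally(resources))
```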
We also talked about confidentiality above. If an https site loads http resources, the contents of what you see on the page cannot be guaranteed. So, how responsible are they when it comes to at least ensuring these third-party resources are loaded over https?
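Checking for that kind of “mixed content” is nearly a one-liner. A Python sketch with made-up sample URLs:

```python
def mixed_content(resource_urls):
    """Flag resources fetched over plain http — on an https page these
    can be tampered with in transit, voiding any confidentiality or
    integrity guarantee the page's own TLS connection provides."""
    return [u for u in resource_urls if u.lower().startswith("http://")]

# hypothetical resource list for illustration only
resources = [
    "https://cdn.example.net/site.css",
    "http://img.example.org/banner.png",
]
print(mixed_content(resources))
```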
You’re mostly covered from a pseudo-confidentiality perspective, but what are they serving up to you? Here’s a summary of the MIME types being delivered to you:
| MIME Type | Number of Resources Loaded |
|---|---|
We’ll cover some of these in more detail a bit further into the post.
Facebook & “Friends”
Facebook started all this, so just how cozy are these Congress-critters with Facebook?
Turns out that both Senators and House members are very comfortable letting you give Facebook a love-tap when you come visit their sites since over 60% of House and 40% of Senate sites use 2 or more Facebook resources. Not all Facebook resources are created equal[ly evil] and we’ll look at some of the more invasive ones soon.
Facebook is not the only devil out there. I added in the public filter list from Disconnect and the numbers go up from 60% to 70% for the House and from 40% to 60% for the Senate when it comes to a larger corpus of known tracking sites/resources.
Here’s a list of the top 20 third-party domains (with one of Twitter’s media-serving domains taking the individual top spot):

| Main third-party domain | # of ‘pings’ | % |
|---|---|---|
So, when you go to check out what your representative is ‘officially’ up to, you’re being served…up on a silver platter to a plethora of sites where you are the product.
It’s starting to look like Congress-folk aren’t as sincere about your privacy as they may have led us all to believe this week.
A [Java]Script for Succes[ful Privacy Destruction]
How many of these sites load and execute third-party JavaScript? Virtually all of them.
To make matters worse, no
.gov or third-party resource of any kind was loaded using subresource integrity validation. Subresource integrity validation means that the site owner — at some point — ensured that the resource being loaded was not malicious and then created a fingerprint for it and told your browser what that fingerprint is so it can compare it to what got loaded. If the fingerprints don’t match, the content is not loaded/executed. Using subresource integrity is not trivial since it requires a top-notch content management team and failure to synchronize/checkpoint third-party content fingerprints will result in resources failing to load.
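For the curious, here is what generating a subresource-integrity fingerprint looks like, sketched in Python (the script URL in the output is a made-up placeholder; sha384 is the digest commonly recommended for SRI):

```python
import base64
import hashlib

def sri_hash(content: bytes) -> str:
    """Compute a subresource-integrity fingerprint (sha384, base64)
    for a resource, so the browser can refuse to run anything whose
    bytes no longer match what the site owner vetted."""
    digest = hashlib.sha384(content).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

fingerprint = sri_hash(b"console.log('hi');")
print(f'<script src="https://cdn.example.net/thing.js" '
      f'integrity="{fingerprint}" crossorigin="anonymous"></script>')
```

If the CDN (or an attacker between you and it) swaps the script out, the fingerprint no longer matches and the browser refuses to execute it.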
Congress was quick to demand that Facebook implement stronger policies and controls, but they, themselves, cannot be bothered.
There are plenty more avenues to explore in this data set (such as “security headers” — they all do pretty well with strict-transport-security but are deeply deficient in others) and more targets for future work, such as the campaign sites of House and Senate members. I may follow up with a look at a specific slice from this data set (the members of the committees who were berating Zuckerberg this week).
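A quick Python sketch of that kind of header audit (the “recommended” list here is my own small illustrative subset, not an exhaustive inventory of security headers one could check):

```python
# A small, illustrative subset of widely recommended security headers.
RECOMMENDED = [
    "strict-transport-security",
    "content-security-policy",
    "x-content-type-options",
    "x-frame-options",
]

def missing_security_headers(headers):
    """Given a response's headers (name -> value), list which of the
    recommended security headers are absent (case-insensitive)."""
    present = {k.lower() for k in headers}
    return [h for h in RECOMMENDED if h not in present]

# hypothetical response headers for illustration only
example = {
    "Strict-Transport-Security": "max-age=31536000",
    "Content-Type": "text/html",
}
print(missing_security_headers(example))
```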
The bottom line is that while the beating Facebook took this week was just, those inflicting the pain have a long way to go themselves before they can truly judge what other social media and general internet sites do when it comes to ensuring the safety and privacy of their visitors.
In other words: “Legislator, regulate thyself” before thou regulatest others.
Apart from some egregiously bad (or benign) examples, I tried not to “name and shame”. I also won’t answer any questions about facets by party, since that really doesn’t matter too much; they’re all pretty bad when it comes to understanding and implementing privacy and safety on their sites.
It’s going to take me a bit to figure out hosting for the code & data for this analysis. The data set is 3GB or more of (heavily nested data frame) content, and that’s not going to work on GitHub. Since I have no idea how “popular” this data will be, I can’t even shunt it off to S3 or my own site without possibly incurring unplanned costs. My code is also not as “pristine” as it usually is since this was a scant-time side project of a side project. However, I’ll gladly show all the code scars upon inquiry. I will say that repeating this from scratch without code is not too difficult provided you’ve got some patience for the scrape-time.
If you are a legit researcher and would like a copy of the data/code drop a note with a well-described purpose/project. This data holds no secret information so it’s unlikely your project is (or should be) a secret, so providing details publicly should not be an issue (and you may even find a collaborator).
I didn’t do this for any type of “attention”, media-or-otherwise. I was curious about this topic and figured others might be as well. However, I grok that members of the press are less likely to make a public inquiry, so if this post did get your attention, feel free to drop a note to my e-mail (which I’m going to assume any decent journalist can readily find) and I’ll gladly work with you on getting you what you need privately.