Scraping GDPR Fines


The website Privacy Affairs keeps a list of fines related to GDPR. I heard* that this might be an interesting dataset for TidyTuesday. The dataset currently contains 250 fines given out for GDPR violations and was last updated (according to the website) on 31 March 2020.

All data is from official government sources, such as official reports of national Data Protection Authorities.

The largest fine is €50,000,000, imposed on Google Inc. in France on January 21, 2019, and the smallest is actually 0 euros, though the website says 90.

Scraping

I use the {rvest} package to scrape the website.

Before you start

I first checked the robots.txt of this website, and it does not disallow scraping this page.
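
The post doesn't show the check itself; for reference, a minimal way to do such a check is with the {robotstxt} package (a sketch, not the author's code):

# Sketch: ask whether the GDPR-fines path may be scraped,
# using the {robotstxt} package (not part of the original post)
library(robotstxt)
paths_allowed("https://www.privacyaffairs.com/gdpr-fines/")
## [1] TRUE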

The scraping

I thought this would be easy and done in a minute, but there were some snafus. It works for now, but if the website changes even a bit, this scraping routine will no longer work. It extracts the script part of the website and pulls out the data between '[' and ']'. If anyone has ideas for making this more robust, be sure to let me know on Twitter.
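
To make that bracket trick concrete before the details below: a toy sketch of "grab everything between the first '[' and the first ']'". The script_txt string here is made up; the real routine, run on the real page source, follows below.

library(stringr)
# made-up stand-in for the text of a <script> node
script_txt <- 'var data = [{"id":1},{"id":2}]; var other = "x";'
# positions of the first '[' and the first ']'
from <- str_locate(script_txt, "\\[")[, "start"]
to   <- str_locate(script_txt, "\\]")[, "start"]
str_sub(script_txt, from, to)
## [1] "[{\"id\":1},{\"id\":2}]"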

Details about the scraping part

First I noticed that the website doesn't show you all of the fines. But when we look at the source of the page, it seems they are all there. It should be relatively simple to retrieve the data, since it sits in the JavaScript part of the page (see picture).

[Image: source code of the website]

But extracting that data takes quite a bit more work:

  • First, find the <script> tags on the website
  • Find the node that contains the data
  • Realize that there are actually two data sources in here
library(rvest)
## Loading required package: xml2
link <- "https://www.privacyaffairs.com/gdpr-fines/"
page <- read_html(link)
# the fines data lives in the 9th <script> node of this page
temp <- page %>%
  html_nodes("script") %>%
  .[9] %>%
  rvest::html_text()
  • Cry (joking, don't give up! The #rstats community will help you!)
  • Do some advanced string manipulation to extract the two JSON structures
  • Read the JSON data into R (a quick sanity check follows after this list)
library(stringr)
# find the positions of every '[' and ']' in the script text;
# the two JSON arrays sit between the first and second bracket pairs
ends <- str_locate_all(temp, "\\]")
starts <- str_locate_all(temp, "\\[")
table1 <- temp %>%
  stringi::stri_sub(from = starts[[1]][1, 2], to = ends[[1]][1, 1]) %>%
  str_remove_all("\n") %>%
  str_remove_all("\r") %>%
  jsonlite::fromJSON()
table2 <- temp %>%
  stringi::stri_sub(from = starts[[1]][2, 2], to = ends[[1]][2, 1]) %>%
  str_remove_all("\n") %>%
  str_remove_all("\r") %>%
  jsonlite::fromJSON()
  • Profit
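
As a quick sanity check that both parses succeeded (a generic sketch; inspect the output to see which columns the site actually returns):

# Sketch: confirm both tables parsed into data frames
dim(table1)   # expect one row per fine if table1 holds the fines
dim(table2)
str(table1, max.level = 1)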

I also tried parsing it as pure text before I gave up and returned to HTML parsing. You can see that attempt in the repo.

(*) I was tricked into this through Twitter, on #rstats and #tidytuesday.

Twitter user hrbrmstr tricked me into doing this: https://twitter.com/hrbrmstr/status/1247476867621421061
