Rescuing Twapperkeeper Archives Before They Vanish

[This article was first published on OUseful.Info, the blog... » Rstats, and kindly contributed to R-bloggers.]

A couple of years or so ago, various JISC folk picked up on the idea that there might be value in them thar tweets and started encouraging the use of Twapperkeeper for archiving hashtagged tweets around events, supporting the development of that service in exchange for an open source version of the code. Since then, Twapperkeeper has been sold on, with news out this week that the current Twapperkeeper archives will be deleted early in the New Year.

Over on the MASHE blog (one of the few blogs in my feeds that just keeps on delivering…), Martin Hawksey has popped up a Twapperkeeper archive importer for Google Spreadsheets that will grab up to 15,000 tweets from a Twapperkeeper archive and store them in a Google Spreadsheet. From there, I'm sure you'll be able to tap in to some of Martin's extraordinary Twitter analysis and visualisation tools, as well as exporting tweets to NodeXL (I think?!).

The great thing about Martin’s “Twapperkeeper Rescue Tool” (?!;-) is that the archived tweets are hosted online, which makes them accessible for web based hacks if you have the Spreadsheet key. But which archives need rescuing? One approach might be to look at which archives have been viewed using @andypowe11’s Summarizr. I don’t know if Andy has kept logs of which tag archives folk have analysed using Summarizr, but these may be worth grabbing. Nor do I know whether JISC has a list of hashtags from events they want to continue to archive. (Presumably a @briankelly risk assessment goes into this somewhere? By the by, I wonder, in light of the Twapperkeeper timeline, whether Brian would now feel the need to change any risk analysis he might previously have posted advocating the use of a service like Twapperkeeper?)

A more selfish approach might be to grab one or more Twapperkeeper archives onto your own computer. Grab your TwapperKeeper Archive before Shutdown! describes how to use R to do this, and is claimed to work for rescues of up to 50,000 tweets from any one archive.

Building on the R code from that example, along with a couple of routines from my own previous R’n’Twitter experiments, here’s some R code that will grab items from a Twapperkeeper archive and parse them into a dataframe that also includes Twitter IDs, sender, to and RT information:

require(XML)
require(stringr)

twapperkeeperRescue=function(hashtag){
    #startCrib: http://libreas.wordpress.com/2011/12/09/twapperkeeper/
    url <- paste("http://twapperkeeper.com/rss.php?type=hashtag&name=",hashtag,"&l=50000", sep="")
    doc <- xmlTreeParse(url, useInternalNodes=TRUE)
    tweet <- xpathSApply(doc, "//item//title", xmlValue)
    pubDate <- xpathSApply(doc, "//item//pubDate", xmlValue)
    #endCrib
    #keep the columns as character strings, not factors
    df=data.frame(tweet, pubDate, stringsAsFactors=FALSE)
    #sender: the item title starts with the sending account's screen name
    df$from=sapply(df$tweet,function(tweet) str_extract(tweet,"^([[:alnum:]_]*)"))
    #tweet ID: the title ends with "- tweet id NNN..."
    df$id=sapply(df$tweet,function(tweet) str_extract(tweet,"[[:digit:]]*$"))
    #tweet text: strip the trailing "- tweet id" marker and metadata, then the sender prefix
    df$txt=sapply(df$tweet,function(tweet) str_trim(str_replace(str_sub(str_replace(tweet,'- tweet id [[:digit:]]*$',''),end=-35),"^([[:alnum:]_]*:)",'')))
    #to: an @name at the very start of the tweet text
    df$to=sapply(df$txt,function(tweet) str_trim(str_extract(tweet,"^(@[[:alnum:]_]*)")))
    #rt: the retweeted account's @name, if the text starts with "RT @name"
    df$rt=sapply(df$txt,function(tweet) str_trim(str_match(tweet,"^RT (@[[:alnum:]_]*)")[2]))
    return(df)
}
#usage: 
#tag='ukdiscovery'
#twarchive.df=twapperkeeperRescue(tag)

#if you want to save the parsed archive:
twapperkeeperSave=function(hashtag,path='./'){
    tweet.df=twapperkeeperRescue(hashtag)
    #sep="" so the filename doesn't pick up paste()'s default space separator
    fn <- paste(path,"twArchive_",hashtag,".csv",sep="")
    write.csv(tweet.df,fn,row.names=FALSE)
}
#usage:
#twapperkeeperSave(tag)
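To see what the parsing step is doing, here's a self-contained sketch that runs the same sort of regular expressions over a made-up item title in the format `twapperkeeperRescue()` assumes (sender prefix, then the tweet text, then a trailing "- tweet id NNN" marker; the real titles also carried extra trailing metadata, which the `str_sub(end=-35)` call trims and which is omitted here for brevity). The sample string and account names are hypothetical:

```r
require(stringr)

#a hypothetical sample item title, in the assumed format
s <- "exampleuser: RT @someone: a sample tweet - tweet id 144735864599216128"

from <- str_extract(s, "^([[:alnum:]_]*)")         #sender screen name
id   <- str_extract(s, "[[:digit:]]*$")            #trailing tweet ID
txt  <- str_trim(str_replace(s, "^([[:alnum:]_]*:)", ""))  #strip sender prefix
rt   <- str_match(txt, "^RT (@[[:alnum:]_]*)")[2]  #retweeted account, if any
```

Here `from` comes out as "exampleuser", `id` as "144735864599216128" and `rt` as "@someone".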

If I get a chance, I’ll try to post some visualisation/analysis functions too…
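In the meantime, as a trivial starting point (and not one of the promised functions), here's a hedged sketch of the sort of thing the parsed dataframe makes easy: counting tweets per sender using the `from` column that `twapperkeeperRescue()` produces. The function name is my own invention:

```r
#count tweets per sender in a rescued archive dataframe,
#most prolific senders first
tweetCounts <- function(df) {
    counts <- as.data.frame(table(df$from), stringsAsFactors = FALSE)
    names(counts) <- c("from", "tweets")
    counts[order(-counts$tweets), ]
}
```

So `tweetCounts(twarchive.df)` would give a simple league table of who tweeted most on the tag.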

