Click the graph above to enlarge it.
The data for this graph was collected automatically every ~60 seconds during the VP debate on 10/11/2012, for an ending aggregate sample of 363,163 tweets. Duplicate tweets (mostly from bots) were then removed, leaving a final dataset of 81,124 unique tweets (52,303 for Biden, 28,821 for Ryan). Each point in the graph is the mean sentiment of the tweets gathered during that minute: the farther a point sits above zero, the more positive that minute's tweets were, and the farther below zero, the more negative. It would be very interesting to compare this to the debate transcript for inference. The one very noticeable takeaway is the jump in sentiment as soon as the debate ended at 22:30.
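As a sketch of the de-duplication and per-minute averaging described above (the data frame name `tweets` and its columns `text`, `created`, and `score` are illustrative assumptions, not the exact objects used in this post):

```r
# Drop bot duplicates, bucket by wall-clock minute, then average sentiment.
# `tweets` and its column names are assumptions for illustration only.
clean <- tweets[!duplicated(tweets$text), ]       # remove duplicate tweet text
clean$minute <- format(clean$created, "%H:%M")    # bucket by minute
per.minute <- aggregate(score ~ minute, data = clean, FUN = mean)
```

Each row of `per.minute` would then correspond to one plotted point.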
R Code for this data collection and graphing
To collect this data I updated my original code from the presidential debate as follows:
textRyan <- laply(Ryan, function(t) t$getText())
textBiden <- laply(Biden, function(t) t$getText())
resultRyan <- score.sentiment(textRyan, positive.words, negative.words)
resultBiden <- score.sentiment(textBiden, positive.words, negative.words)
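The score.sentiment() called above is, I believe, Jeffrey Breen's widely circulated lexicon-matching scorer; a minimal sketch of that idea (not the exact function used here) looks like:

```r
# Minimal lexicon-matching sentiment scorer in the spirit of Jeffrey Breen's
# score.sentiment(); a sketch of the idea, not the exact function used above.
score.sentiment <- function(sentences, pos.words, neg.words) {
  scores <- sapply(sentences, function(sentence) {
    sentence <- tolower(gsub("[^[:alnum:][:space:]]", " ", sentence))
    words <- unlist(strsplit(sentence, "\\s+"))
    # score = positive-word hits minus negative-word hits
    sum(words %in% pos.words) - sum(words %in% neg.words)
  })
  data.frame(score = scores, text = sentences)
}
```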
Then, to have R automatically collect the data every 60 seconds in an endless loop (I wasn't sure when I wanted to stop it at the time), you just wrap the collection code in a repeat loop:
debate <- merge(x, debate, all = TRUE)
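The full loop isn't shown above; a sketch of what it might have looked like (the searchTwitter queries, counts, and timestamp column are assumptions, not the exact code):

```r
# Endless collection loop: pull fresh tweets, score them, timestamp the
# batch, and merge into the running `debate` data frame every ~60 seconds.
# Search terms and twitteR call parameters here are assumptions.
repeat {
  Ryan  <- searchTwitter("#ryan",  n = 500)
  Biden <- searchTwitter("#biden", n = 500)
  textRyan  <- laply(Ryan,  function(t) t$getText())
  textBiden <- laply(Biden, function(t) t$getText())
  resultRyan  <- score.sentiment(textRyan,  positive.words, negative.words)
  resultBiden <- score.sentiment(textBiden, positive.words, negative.words)
  x <- rbind(resultRyan, resultBiden)
  x$time <- date()                        # timestamp this batch
  debate <- merge(x, debate, all = TRUE)  # append to the running dataset
  Sys.sleep(60)                           # wait a minute, then repeat
}
```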
At 10:56pm I got bored and the debate was over, so I just hit stop and ran the following to get the graph:
library(reshape2)
x$minute <- strptime(x$time, "%a %b %d %H:%M:%S %Y")
means <- data.frame(period, Biden, Ryan)
dfm <- melt(means, id.vars = "period")
ggplot(dfm, aes(period, value, colour = variable, group = variable)) +
  geom_line() +
  xlab("time") + ylab("score") +
  opts(axis.ticks = theme_blank(), axis.title.y = theme_blank())
I have to admit, doing this actually made watching the debate kind of fun.
For cleaner access to the code please go to my GitHub.