After reading The Life-Changing Magic of Tidying Text and A tidy text analysis of Rick and Morty, I wanted to do something similar for Rick and Morty, and I did. Now I’m doing the same for Gravity Falls.
In this post I’ll focus on the Tidy Data principles. However, here is the GitHub repo with the scripts to scrape the subtitles of Rick and Morty and other shows.
Note: If some images appear too small on your screen you can open them in a new tab to show them in their original size.
The subtools package returns a data frame after reading srt files. In addition to that resulting data frame, I wanted to explicitly record the season and episode of each line of the subtitles. To do that I had to scrape the subtitles and then use
str_replace_all. To follow the steps, clone the repo from GitHub:
git clone https://github.com/pachamaltese/rick_and_morty_tidy_text
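The repo contains the actual scraping scripts. Just to illustrate the idea, here is a minimal sketch, not the repo’s code, of how str_replace_all can turn a subtitle file name into season and episode labels (the file name below is hypothetical):

library(stringr)

# hypothetical subtitle file name, only for illustration
subtitle_file <- "Gravity.Falls.S01E02.srt"

# keep the captured group and drop everything around it
season  <- str_replace_all(subtitle_file, ".*(S[0-9]{2})E[0-9]{2}.*", "\\1")
episode <- str_replace_all(subtitle_file, ".*S[0-9]{2}(E[0-9]{2}).*", "\\1")

season   # "S01"
episode  # "E02"

In the tidy file used below, those labels end up as the season and episode columns.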
Gravity Falls Can Be So Tidy
After reading the tidy file I created when scraping the subtitles, I use
unnest_tokens to split the subtitles into words. This function uses the tokenizers package to separate each line into words. The default tokenizing is for words, but other options include characters, sentences, lines, paragraphs, or separation around a regex pattern.
if (!require("pacman")) install.packages("pacman") p_load(data.table, tidyr, tidytext, dplyr, ggplot2, viridis, ggstance, igraph, ggraph) p_load_gh("dgrtwo/widyr") gravity_falls_subs <- as_tibble(fread("../../data/2017-10-13-rick-and-morty-tidy-data/gravity_falls_subs.csv")) %>% mutate(text = iconv(text, to = "ASCII")) %>% drop_na() gravity_falls_subs_tidy <- gravity_falls_subs %>% unnest_tokens(word,text) %>% anti_join(stop_words)
The data is in one-word-per-row format, and we can manipulate it with tidy tools like dplyr. For example, in the last chunk I used an
anti_join to remove words such as “a”, “an”, or “the”.
Then we can use
count to find the most common words in all of Gravity Falls episodes as a whole.
gravity_falls_subs_tidy %>% count(word, sort = TRUE)
# A tibble: 7,541 x 2
   word       n
 1 mabel    456
 2 hey      453
 3 ha       416
 4 stan     369
 5 dipper   347
 6 gonna    341
 7 time     313
 8 yeah     291
 9 uh       264
10 guys     244
# … with 7,531 more rows
Sentiment analysis can be done as an inner join. There is one sentiment lexicon in the tidytext package. Let’s examine how sentiment changes during each season by counting the number of positive and negative words in the episodes of each season.
# score sentiment in blocks of 50 subtitle lines within each episode
gravity_falls_sentiment <- gravity_falls_subs_tidy %>%
  inner_join(sentiments) %>%
  count(episode_name, index = linenumber %/% 50, sentiment) %>%
  spread(sentiment, n, fill = 0) %>%
  mutate(sentiment = positive - negative) %>%
  left_join(gravity_falls_subs_tidy[, c("episode_name", "season", "episode")] %>% distinct()) %>%
  arrange(season, episode) %>%
  mutate(episode_name = paste(season, episode, "-", episode_name),
         season = factor(season, labels = paste("Season", 1:2))) %>%
  select(episode_name, season, everything(), -episode)

gravity_falls_sentiment
# A tibble: 381 x 6
   episode_name                   season  index negative positive sentiment
 1 S01 E01 - Tourist Trapped      Season…     0       10        9        -1
 2 S01 E01 - Tourist Trapped      Season…     1       12        3        -9
 3 S01 E01 - Tourist Trapped      Season…     2       10        9        -1
 4 S01 E01 - Tourist Trapped      Season…     3       14        6        -8
 5 S01 E01 - Tourist Trapped      Season…     4       10        5        -5
 6 S01 E01 - Tourist Trapped      Season…     5       13        3       -10
 7 S01 E01 - Tourist Trapped      Season…     6        7        5        -2
 8 S01 E01 - Tourist Trapped      Season…     7        9        7        -2
 9 S01 E01 - Tourist Trapped      Season…     8        1        1         0
10 S01 E02 - The Legend of the G… Season…     0        2       15        13
# … with 371 more rows
Now we can plot these sentiment scores across the plot trajectory of each season.
ggplot(gravity_falls_sentiment, aes(index, sentiment, fill = season)) +
  geom_bar(stat = "identity", show.legend = FALSE) +
  facet_wrap(~season, nrow = 3, scales = "free_x", dir = "v") +
  theme_minimal(base_size = 13) +
  labs(title = "Sentiment in Gravity Falls", y = "Sentiment") +
  scale_fill_viridis(end = 0.75, discrete = TRUE) +
  scale_x_discrete(expand = c(0.02, 0)) +
  theme(strip.text = element_text(hjust = 0)) +
  theme(strip.text = element_text(face = "italic")) +
  theme(axis.title.x = element_blank()) +
  theme(axis.ticks.x = element_blank()) +
  theme(axis.text.x = element_blank())
Looking at Units Beyond Words
Lots of useful work can be done by tokenizing at the word level, but sometimes it is useful or necessary to look at different units of text. For example, some sentiment analysis algorithms look beyond only unigrams (i.e. single words) to try to understand the sentiment of a sentence as a whole. These algorithms try to understand that “I am not having a good day” is a negative sentence, not a positive one, because of negation.
gravity_falls_sentences <- gravity_falls_subs %>%
  group_by(season) %>%
  unnest_tokens(sentence, text, token = "sentences") %>%
  ungroup()
Let’s look at just one.
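For instance, a quick way to peek at a single tokenized sentence (the row number below is arbitrary):

gravity_falls_sentences$sentence[39]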
We can use tidy text analysis to ask questions such as: what are the most negative episodes in each of Gravity Falls’s seasons? First, let’s get the list of negative words from the lexicon. Second, let’s make a data frame of how many words are in each episode so we can normalize for episode length. Then, let’s find the number of negative words in each episode and divide by the total words in that episode. Which episode has the highest proportion of negative words?
# negative words from the lexicon
sentiment_negative <- sentiments %>%
  filter(sentiment == "negative")

# total words per episode, to normalize for episode length
wordcounts <- gravity_falls_subs_tidy %>%
  group_by(season, episode) %>%
  summarize(words = n())

# proportion of negative words, keeping the most negative episode per season
gravity_falls_subs_tidy %>%
  semi_join(sentiment_negative) %>%
  group_by(season, episode) %>%
  summarize(negativewords = n()) %>%
  left_join(wordcounts, by = c("season", "episode")) %>%
  mutate(ratio = negativewords / words) %>%
  top_n(1)
# A tibble: 2 x 5
# Groups:   season
  season episode negativewords words ratio
1 S01    E14               124   944 0.131
2 S02    E06               129   962 0.134
Networks of Words
The widyr package provides
pairwise_count, which counts pairs of items that occur together within a group. Let’s count the words that occur together in the lines of the first season.
gravity_falls_words <- gravity_falls_subs_tidy %>%
  filter(season == "S01")

word_cooccurences <- gravity_falls_words %>%
  pairwise_count(word, linenumber, sort = TRUE)

word_cooccurences
# A tibble: 471,288 x 3
   item1   item2       n
 1 grunkle stan       90
 2 stan    grunkle    90
 3 stan    hey        70
 4 hey     stan       70
 5 hey     mabel      67
 6 mabel   hey        67
 7 hey     dipper     59
 8 dipper  hey        59
 9 mabel   dipper     57
10 dipper  mabel      57
# … with 471,278 more rows
This can be useful, for example, to plot a network of co-occurring words with the igraph and ggraph packages.
set.seed(1717)

# plot word pairs that appear together in at least 25 subtitle lines
word_cooccurences %>%
  filter(n >= 25) %>%
  graph_from_data_frame() %>%
  ggraph(layout = "fr") +
  geom_edge_link(aes(edge_alpha = n, edge_width = n), edge_colour = "#a8a8a8") +
  geom_node_point(color = "darkslategray4", size = 8) +
  geom_node_text(aes(label = name), vjust = 2.2) +
  ggtitle(expression(paste("Word Network in Gravity Falls's ", italic("Season One")))) +
  theme_void()