[This article was first published on data science ish, and kindly contributed to R-bloggers.]
It is a truth universally acknowledged that sentiment analysis is super fun, and Pride and Prejudice is probably my very favorite book in all of literature, so let’s do some Jane Austen natural language processing.
Project Gutenberg makes e-texts available for many, many books, including Pride and Prejudice, which is available here. I am using the plain text UTF-8 file available at that link for this analysis. Let’s read the file and get it ready for analysis.
Munge the Data, But ELEGANTLY, As Would Befit Jane Austen
The plain text file has lines that are just over 70 characters long. We can read them in using the readr library, which is super fast and easy to use. Let’s use the skip and n_max options to leave out the Project Gutenberg header and footer information and just get the actual text of the novel. Lines of 70 characters are not really a big enough chunk of text to be useful for my purposes here (that’s not even a tweet!) so let’s use stringr to concatenate these lines in chunks of 10. That gives us sort of paragraph-sized chunks of text.
Maybe you don’t think for loops are elegant, actually, but I could not come up with a way to vectorize this.
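A sketch of that munging step is below. The filename and the exact skip/n_max values are assumptions here; check them against your own copy of the Project Gutenberg file.

```r
library(readr)
library(stringr)

# Skip the Project Gutenberg header and stop before the footer; these
# particular skip/n_max values are assumptions, not the exact ones
raw_pp <- read_lines("pg1342.txt", skip = 30, n_max = 13000)

# Concatenate the ~70-character lines into chunks of 10 lines each
# (not elegant, but it works)
pp <- character()
for (i in seq(1, length(raw_pp), by = 10)) {
  pp <- c(pp, str_c(raw_pp[i:min(i + 9, length(raw_pp))], collapse = " "))
}
```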
Mr. Darcy Delivered His Sentiments in a Manner Little Suited to Recommend Them
I was not sure, when I stopped to think about it, exactly how appropriate the NRC lexicon is for analyzing 200-year-old text. Language changes over time, and from what I can tell, the NRC lexicon is designed and validated to measure sentiment in contemporary English. It was created via crowdsourcing on Amazon’s Mechanical Turk. However, it doesn’t seem to do badly on Jane Austen’s prose; the sentiment results are about what one would expect compared to a human reading of the meaning. If anything, the text in Pride and Prejudice involves more dramatic vocabulary than a lot of contemporary English prose, which makes it easier for a tool like the NRC lexicon to pick up on the emotions involved.
So let’s start from a working hypothesis that the NRC lexicon can be applied to this novel and do the sentiment analysis for each chunk of text in our dataframe. At the same time, let’s make a linenumber column that counts up through the novel.
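Something along these lines, assuming the pp chunks from the munging step above; get_nrc_sentiment is the syuzhet function that implements the NRC lexicon.

```r
library(syuzhet)

# Eight emotion scores plus positive/negative for each chunk, with a
# linenumber column counting up through the novel
pp_nrc <- cbind(linenumber = seq_along(pp), get_nrc_sentiment(pp))
```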
Dividing Up the Volumes
Pride and Prejudice contains 61 chapters divided into three volumes; Volume I is Chapters 1-23, Volume II is Chapters 24-42, and Volume III is Chapters 43-61. Let’s find where these breaks between volumes have ended up.
Let’s make a volume factor for the dataframe and then restart the linenumber count at the beginning of each volume.
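A sketch of how that might look with dplyr, assuming the scored dataframe from above. The exact chapter-heading strings to grep for are assumptions about this particular e-text.

```r
library(dplyr)

# Chunks where Volumes II and III begin (Chapters 24 and 43)
vol2_start <- min(grep("Chapter 24", pp))
vol3_start <- min(grep("Chapter 43", pp))

pp_nrc <- pp_nrc %>%
  mutate(volume = factor(ifelse(linenumber < vol2_start, "Volume I",
                         ifelse(linenumber < vol3_start, "Volume II",
                                "Volume III")),
                         levels = c("Volume I", "Volume II", "Volume III"))) %>%
  group_by(volume) %>%
  mutate(linenumber = row_number()) %>%  # restart the count in each volume
  ungroup()
```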
Positive and Negative Sentiment
First let’s look at the overall positive vs. negative sentiment in the text of Pride and Prejudice before looking at more specific emotions.
Here, each chunk of text has a score for the positive sentiment and the negative sentiment; a given chunk of text could have high scores for both, low scores for both, or any combination thereof. I have made the sign of the negative sentiment negative for plotting purposes. Let’s make a dataframe of some important events in the novel to annotate the plots; I found the chapters for these events and matched them up to the correct volumes and line numbers.
Now let’s plot the positive and negative sentiment.
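A sketch with ggplot2, assuming the pp_nrc dataframe from the previous steps. The event line numbers in the annotation dataframe are illustrative placeholders, not the positions I actually found.

```r
library(ggplot2)

# A couple of key events for annotating the plots
# (placeholder positions, for illustration only)
events <- data.frame(
  volume     = factor(c("Volume II", "Volume III"),
                      levels = c("Volume I", "Volume II", "Volume III")),
  linenumber = c(40, 30),
  event      = c("First proposal", "Lydia elopes"))

ggplot(pp_nrc, aes(x = linenumber)) +
  geom_point(aes(y = positive), colour = "aquamarine4", size = 1) +
  geom_point(aes(y = -negative), colour = "midnightblue", size = 1) +
  geom_text(data = events, aes(y = 18, label = event), size = 3) +
  facet_grid(. ~ volume, scales = "free_x", space = "free_x") +
  labs(x = "Narrative time", y = "Sentiment score")
```

The space = "free_x" option sizes each facet by its share of the novel, which is why Volume II comes out narrowest.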
Narrative time runs along the x-axis. Volume II is the shortest of the three parts of the novel. We can see here that the positive sentiment scores are overall much higher than the negative sentiment, which makes sense for Jane Austen’s writing style. We can see some more strongly negative sentiment when Mr. Darcy proposes for the first time and when Lydia elopes. Let’s try visualizing these same data with a bar chart instead of points.
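The same data drawn with bars rather than points (again assuming pp_nrc from the steps above):

```r
library(ggplot2)

ggplot(pp_nrc, aes(x = linenumber)) +
  geom_bar(aes(y = positive), stat = "identity", fill = "aquamarine4") +
  geom_bar(aes(y = -negative), stat = "identity", fill = "midnightblue") +
  facet_grid(. ~ volume, scales = "free_x", space = "free_x") +
  labs(x = "Narrative time", y = "Sentiment score")
```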
I like certain aspects of both of these styles of plots. What do you think? Is one of these clearer or more appealing to you?
Fourier Transform Time
The previous plots showed both the positive and negative sentiment, but we could also take each chunk of text and assign one value, the positive sentiment minus the negative sentiment for an overall sense of the emotional content of the text. Let’s do that for a new view of the novel’s content.
Now let’s plot this single measure of the sentiment in the novel.
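One way to sketch this, assuming pp_nrc from the earlier steps:

```r
library(dplyr)
library(ggplot2)

# One overall measure: positive minus negative for each chunk
pp_nrc <- pp_nrc %>% mutate(sentiment = positive - negative)

ggplot(pp_nrc, aes(x = linenumber, y = sentiment)) +
  geom_point(size = 1) +
  facet_grid(. ~ volume, scales = "free_x", space = "free_x") +
  labs(x = "Narrative time", y = "Net sentiment (positive - negative)")
```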
To better see the overall trajectory of the narrative, we can filter and transform these sentiment scores using a low-pass filter Fourier transform. Matthew Jockers, the author of the syuzhet package, describes this in more detail here.
Now, I am a little rusty on the Fourier transform. I haven’t thought much about it since I was a physics undergrad taking an electronics lab; I vaguely remember that I made a square wave by adding up a bunch of sine waves. In the case here with text from a novel, the sentiment scores are the time-domain signal. Taking the Fourier transform finds the set of sinusoidal functions that sum to represent that time-domain signal. The transformed values show us where the narrative sentiment is positive or negative, and the low-pass filter lets us see the overall structure of the narrative (i.e., the low-frequency structure) while filtering out high-frequency information. We just have to decide how many components to keep for the low-pass filtering.
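In syuzhet this is get_transformed_values; a sketch assuming the net sentiment scores from above, keeping 3 low-frequency components (the package default):

```r
library(syuzhet)
library(ggplot2)

# Low-pass filtered Fourier transform of the net sentiment, resampled
# to 100 points of normalized narrative time
pp_ft <- get_transformed_values(pp_nrc$positive - pp_nrc$negative,
                                low_pass_size = 3,
                                x_reverse_len = 100,
                                scale_vals = TRUE,
                                scale_range = FALSE)

ggplot(data.frame(narrative_time = seq_along(pp_ft), ft = pp_ft),
       aes(x = narrative_time, y = ft)) +
  geom_line() +
  labs(x = "Narrative time", y = "Transformed sentiment score")
```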
This probably jumps out as pretty obvious, but the raw sentiment scores in Pride and Prejudice were mostly positive; the filtered and transformed values have been scaled and centered here to show the overall shape of the narrative. Notice that the important events correspond to the maxima and minima in the transformed and filtered sentiment score. I am just delighted about that. Math! It is the best. I do want to be careful not to overemphasize that result just now, though, because it depends on how many Fourier components we keep during the low-pass filtering. This plot is made by keeping 3 components, the default in the syuzhet package; the shape will look a little different, with more small-scale (i.e., higher-frequency) structure, if we keep 4 or 5 components, and the important plot events may not align quite as perfectly with a maximum, for example. I would like to explore this point more.
The NRC lexicon includes scores for eight emotions, along with the overall positive and negative sentiment scores. Let’s see how these emotion scores change during the novel. We will need bigger chunks of text to make reasonable looking plots, so let’s go back and concatenate our chunks into bits that are five times larger. (The last chunk will be a bit shorter because it doesn’t come out exactly even.)
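Going back to the pp chunks from earlier and pasting them together five at a time, with the same not-so-elegant for loop as before:

```r
library(stringr)

# Concatenate the 10-line chunks into bigger bits, five at a time;
# the last chunk will come out a bit shorter
pp_big <- character()
for (i in seq(1, length(pp), by = 5)) {
  pp_big <- c(pp_big, str_c(pp[i:min(i + 4, length(pp))], collapse = " "))
}
```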
Now let’s find the sentiment scores, divide between the three volumes of the novel, and melt for plotting.
Let’s capitalize the names of the emotions for plotting, and also let’s reorder the factor so that more positive emotions are together in the plot and more negative emotions are together in the plot.
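A sketch of those steps together, assuming the bigger pp_big chunks from above; reshape2’s melt puts the scores in long format for ggplot2.

```r
library(syuzhet)
library(reshape2)
library(stringr)

emotions <- cbind(linenumber = seq_along(pp_big), get_nrc_sentiment(pp_big))

# Keep linenumber plus the eight emotions (columns 1-9; the overall
# positive/negative scores are the last two columns), then melt
emotions_m <- melt(emotions[, 1:9], id.vars = "linenumber",
                   variable.name = "emotion", value.name = "score")

# Capitalize, and reorder so the more positive emotions sit together
emotions_m$emotion <- factor(str_to_title(emotions_m$emotion),
                             levels = c("Anticipation", "Joy", "Surprise",
                                        "Trust", "Anger", "Disgust",
                                        "Fear", "Sadness"))
```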
For plotting the emotions, let’s make heat maps in the style of Bob Rudis. When I saw him put some examples of these heat maps on Twitter, I just knew that I needed to make some.
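A minimal version of that heat map with geom_tile and the viridis color scale, assuming the melted emotions_m dataframe from above:

```r
library(ggplot2)
library(viridis)

ggplot(emotions_m, aes(x = linenumber, y = emotion, fill = score)) +
  geom_tile(colour = "white") +
  scale_fill_viridis(name = "Score") +
  labs(x = "Narrative time", y = NULL) +
  theme_minimal()
```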
Oh, they’re so pretty… We can see the positive emotions are stronger than the negative ones, which is sensible given Austen’s bright, humorous writing style. The negative emotions are stronger in the middle of Volume II when Mr. Darcy proposes for the first time and near the beginning of Volume III when Lydia elopes.
Wow, this was so much fun, although obviously I have outed myself as a super fan. Good thing I have no shame about that whatsoever. The Fourier transformed sentiment values were so interesting, and are perfect for comparing across different texts. I am eager to try that out on some different novels. Boy, I just love that we can do MATH on WORDS; those are two of my very favorite things. The R Markdown file used to make this blog post is available here. I am very happy to hear feedback or questions!