A Neural Network learns to talk like Michael Scott


(Read Time: ~5-6 minutes. Image from tumblr.com)

I want to start off this post by saying that this post was made possible thanks to Max Woolf (Twitter: @minimaxir) and his Python package textgenrnn, a text generating recurrent neural network that, when trained, can predict the next character (or word) based on a number of previous characters (or words). He also has a great tutorial on how to implement textgenrnn on actual text data. Many thanks!

Bot or Not: A Michael Scott Quote-Generating Bot

“It is so nice to be back in a country that has movies.”

“I want you to rub butter on my foot.”

“Gruel. Sandwiches. Gruel omelettes. Nothing but gruel. Plus, you can eat your own hair.”

“I’m not going to settle for a turtle.”

“Are you wearing a new bag?”

“Don’t mind if I do. See you in a big package in my office.”

If you are as big of a fan of The Office as I am, you may have recognized that the first three are real quotes from Michael Scott, while the last three are not. The last three quotes were generated by a bot, or more specifically, by a Recurrent Neural Network. Using all of Michael Scott's quotes from the show as input, the model was trained to learn his speech pattern (i.e. his sequences of words) and was then used to generate these new quotes by predicting the word that comes next in a sequence based on the previous words.

Despite the name of my blog, this Neural Network model was trained in Python, not R (if you came here expecting at least some R, here is a web scraper I wrote in R to scrape The Office quotes from this website). You can also find the scraped quotes here.

I used textgenrnn, a Python package written by Max Woolf. It is an implementation of a multi-layer recurrent neural network called char-rnn (by Andrej Karpathy), with additional functionality added by Max. This includes word-level model training and text generation, the ability to train models on a GPU, and the ability to alter the number of previous characters/words used to predict the next character/word, in addition to the typical hyper-parameters of a neural network such as the number of layers and the number of neurons in each layer.

Max also has a great tutorial on how to implement his package textgenrnn on text data, which I largely relied on. You can build your own textgenrnn model in a Jupyter Notebook using Google’s Colaboratory, with an option to use a GPU at runtime to substantially speed up training. The best part: it’s completely FREE.
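To get a feel for the package before customizing anything, the out-of-the-box workflow is only a few lines: textgenrnn loads a small pre-trained character-level model and fine-tunes it on whatever line-delimited text file you point it at. Below is a minimal sketch; the file name office_quotes.txt is just a placeholder for your own file of quotes.

from textgenrnn import textgenrnn

# Minimal, default (character-level) usage sketch.
# 'office_quotes.txt' is a hypothetical file with one quote per line.
textgen = textgenrnn()                        # load the pre-trained base model
textgen.train_from_file('office_quotes.txt',  # fine-tune on your own text
                        num_epochs=10)
textgen.generate(5)                           # print 5 generated lines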

Text generating bots are seldom perfectly human-like, and are typically known to produce weird and nonsensical results (a great blog that has many amusing applications of Neural Networks generating text outputs is Janelle Shane’s blog aiweirdness, which I highly recommend to everyone for some good fun).

This makes Michael Scott a perfect subject for this model, since a lot of what Michael says in the show is nonsensical to begin with.

(Image from tumblr.com)

The Birth of Michael Scott Bot

The fun part about training this model was watching the model come to life. It is almost as if I was watching an infant learn the miracle of speech, except in this case, a copy of Michael Scott.

The model was trained at the word level (to keep the training time short), with a 4-layer, 128-neurons-per-layer network architecture and a max length of 10 (i.e. the number of previous words used to predict the next word in the sequence). In hindsight, I probably should have used a smaller architecture for training a model at the word level (to prevent overfitting).

Here is the parameter configuration for the model training:

model_cfg = {
    'rnn_size': 128,            # number of neurons in each recurrent layer
    'rnn_layers': 4,            # number of recurrent layers
    'rnn_bidirectional': True,  # process each sequence in both directions
    'max_length': 10,           # number of previous words used to predict the next word
    'max_words': 10000,         # maximum vocabulary size (word level only)
    'dim_embeddings': 100,      # dimensionality of the word embeddings
    'word_level': True          # train at the word level rather than the character level
}

train_cfg = {
    'line_delimited': True,     # each line of the input file is its own document (quote)
    'num_epochs': 100,          # number of full passes through the data
    'gen_epochs': 2,            # print sample output every 2 epochs
    'batch_size': 1024,         # large batch size to make use of the GPU
    'train_size': .8,           # hold out 20% of the data for validation
    'dropout': 0.2,             # randomly ignore 20% of input tokens each epoch (regularization)
    'max_gen_length': 300,      # cap on the length of generated samples
    'validation': True,         # compute validation loss on the held-out set
    'is_csv': False             # input is a plain text file, not a CSV
}
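These two dictionaries are then fed into a single train_from_file() call, roughly as in Max's Colaboratory notebook. The sketch below assumes the scraped quotes live in a line-delimited file named michael_scott_quotes.txt (a hypothetical name); 'line_delimited' is used by the notebook itself rather than passed to the call.

from textgenrnn import textgenrnn

textgen = textgenrnn(name='michael_scott')  # prefix used for the saved weights/vocab/config files

# new_model=True builds a fresh word-level network instead of fine-tuning the
# pre-trained character-level one; the two config dicts above supply the rest.
textgen.train_from_file(
    'michael_scott_quotes.txt',   # hypothetical file: one quote per line
    new_model=True,
    word_level=model_cfg['word_level'],
    rnn_size=model_cfg['rnn_size'],
    rnn_layers=model_cfg['rnn_layers'],
    rnn_bidirectional=model_cfg['rnn_bidirectional'],
    max_length=model_cfg['max_length'],
    max_words=model_cfg['max_words'],
    dim_embeddings=model_cfg['dim_embeddings'],
    num_epochs=train_cfg['num_epochs'],
    gen_epochs=train_cfg['gen_epochs'],
    batch_size=train_cfg['batch_size'],
    train_size=train_cfg['train_size'],
    dropout=train_cfg['dropout'],
    max_gen_length=train_cfg['max_gen_length'],
    validation=train_cfg['validation'],
    is_csv=train_cfg['is_csv'])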

I trained the model over 100 epochs (an epoch is one full pass through the entire data set), which, when trained on the GPU in Colaboratory, took less than 30 minutes (it would take many times as long on a CPU). Let’s see how this bot developed during its training phase.

(Image from weheartit.com)

At Epoch 2/100, Michael Bot is discovering itself. He is discovering what he is, and moreover, what he is not…

Epoch 2/100
103/103 [==============================] - 21s 207ms/step - loss: 4.1440 - val_loss: 4.2453
####################
Temperature: 0.2
####################
i ' m not just . . . . .

i ' m not going to be a little bit of the company .

i am going to be a lot of the party .

####################
Temperature: 0.5
####################
i ' m not going to know .

yeah , no , that ' s not . i am gonna tell you .

ok , i am not going to me .
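(A quick aside on those "Temperature" headers: temperature controls how adventurous the sampling is. A low temperature makes the model stick to its most confident next-word predictions, which is why the 0.2 samples are so repetitive, while a higher temperature lets it take more risks. These samples are printed automatically every gen_epochs epochs during training; after training, the same kind of output can be reproduced on demand with a sketch like the following.)

# Reproduce the training-time sample printouts at several temperatures.
# Lower temperature = safer, more repetitive text; higher = more surprising.
textgen.generate_samples(n=3, temperatures=[0.2, 0.5, 1.0])

# Or generate a single quote at one chosen temperature:
textgen.generate(1, temperature=0.5)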

At Epoch 4/100, he is still self-discovering – what he’s going to be, or not to be.

Epoch 4/100
103/103 [==============================] - 21s 207ms/step - loss: 3.5599 - val_loss: 4.1743
####################
Temperature: 0.2
####################
i ' m not gonna go to the company .

i ' m not gonna be a little overwhelmed .

i am not going to be a good to the place .

####################
Temperature: 0.5
####################
no , no . no , no . i am going to be a little something . d

you know what ? i ' m gonna do you .

i ' m not gonna be a baby .

At Epoch 8/100, Michael Bot is going through some sort of existential crisis while discovering his purpose, and he seems to know something but chooses to live in secrecy.

Epoch 8/100
103/103 [==============================] - 21s 201ms/step - loss: 2.6478 - val_loss: 4.4112
####################
Temperature: 0.2
####################
i ' m not going to tell you anything .

i ' m not going to kill myself .

i ' m not going to tell you something .

At Epoch 20/100, he seems to have gotten stuck in some infinite loop of “not knowing”. Perhaps he has discovered that there is so much in the world that he does not know.

Epoch 20/100
103/103 [==============================] - 21s 201ms/step - loss: 1.2720 - val_loss: 5.3723
####################
Temperature: 0.2
####################
i don ' t know .

i don ' t know . i don ' t know . i don ' t know .

i don ' t know .

Skipping ahead, at Epoch 50/100, Michael Bot is learning to put some more words together in a somewhat appropriate sequence. He is even beginning to ask questions in a conversational manner.

Epoch 50/100
103/103 [==============================] - 21s 205ms/step - loss: 0.6140 - val_loss: 6.6774
####################
Temperature: 0.2
####################
i ' m just calling because you responded positively to the best christmas party i supposed to talk to sales , because i ' d like to make out your reservation and then you could just came up with a bunch of different people ?

i ' m not going to make this one .

no . . . you ' re not going to toss the party .

####################
Temperature: 0.5
####################
i ' m not going to make this one .

i think you knew exactly what the craziest part of this is ? she is what i ' d do with his child of his own rules . like me , the real surprise that i give you , too . you know , give yourself a exam of money . thank you um , i want you to give him your undivided attention .

oh , god . oh , my god ! i don ' t really get this to ruin her honeymoon .

####################
Temperature: 1.0
####################
i would love that .

hi . how are you ?

what time is wrong with you people ?

At Epoch 68/100, it seems as if Michael Bot discovered that he is not human but rather a mere robot. Sort of like the scarecrow in the Wizard of Oz, except with a cynical realization that he in fact won’t get a brain.

Epoch 68/100
103/103 [==============================] - 21s 205ms/step - loss: 0.5380 - val_loss: 6.8999
####################
Temperature: 0.2
####################
i ' m not going get a brain , got it .

i ' m not going get a brain , got it .

i ' m not going get a brain , got it .

At Epoch 88/100, Michael Bot has become even more conversational with longer sentences.

Epoch 88/100
103/103 [==============================] - 21s 199ms/step - loss: 0.4863 - val_loss: 7.0063
####################
Temperature: 0.5
####################
i ' m not gonna go back and ask me something charles . and i promised that you know why , i think we can do this .

i don ' t know . i saw this thing about oscar . and i was just like you to look into this ? do you accept the game ?

hey . stanley ? how so ya doing ?

By the end, the training loss had become significantly smaller than the validation loss, suggesting that the model was overfitting. As a result, many of the quotes generated by the bot were very similar to the original quotes from the show (sometimes exactly the same). Nevertheless, the model still managed to generate some humorous quotes, often by switching out just a few words from existing quotes.

Michael Scott Bot Generated Quotes:

no . i think they were very disrespectful to me . you are the complete package , and i am a big man .

oh ! i am so sorry that phyllis hates you . and hates your high . and i have something . .

everybody , david wallace and i have talked and everybody into the pool , and we sell ?

i know , i know . well , i ‘ ll land on my feet into a little jerk .

yeah , i asked for pickles with my car .

uh , yeah . i also saved her life . i was being groomed .

oh do not raise your mind .

i ‘ m just gonna somersault .

I also trained a Dwight Schrute Bot with the same architecture. Dwight was also a great subject for this because of his bluntness and his unique vocabulary:

a single piece brothel calves , jim . i know politics tradition

erin , back row . ryan . jujitsu .

i didn ‘ t sucker punch you , pale , gross mean .

no , no , no , no . i got a hobbit .

maybe you and charles should kick the paper ball .

you ‘ re right . mercy , you ‘ re too stupid .

i had a barber once who used to comb my force of heday .

Finally, I trained a Jim Halpert Bot, for which it proved more difficult to generate new and unique quotes than for the Michael or Dwight bots, presumably because Jim tends to speak like a normal person in comparison. Here are a couple of examples, however, side by side with the original quotes they most likely played off of:

Original Quote: “Ooh, I like where this is going. Unfortunately I have a lot of work today so I’m gonna have to hand this off to my number two. But, don’t worry, he’s the best in the biz.”

Jim Bot: “ooh , i like where this is going . unfortunately i have a meeting ?”

Original Quote: “It is transfer, extension, and then transfer again.”

Jim Bot: “it is transfer , extension , and it ‘ s impossible .”

And just like that, one can easily build a text-generating bot in Python using textgenrnn. I hope you enjoyed this post, please leave a comment if you would like to see more posts like this!
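If you give the model a name when you create it, textgenrnn writes the trained weights, vocabulary, and config to files named after it, so a trained bot can be downloaded from Colaboratory and reloaded later without retraining. A hedged sketch, assuming a model that was created with name='michael_scott':

from textgenrnn import textgenrnn

# Reload a previously trained model from its saved artifacts
# (file names assume the model was created with name='michael_scott').
textgen = textgenrnn(weights_path='michael_scott_weights.hdf5',
                     vocab_path='michael_scott_vocab.json',
                     config_path='michael_scott_config.json')

# Write 25 new "Michael Scott" quotes to a text file.
textgen.generate_to_file('michael_scott_bot_quotes.txt', n=25, temperature=0.5)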
