If the typing monkeys had met Mr Markov: probabilities of spelling "omglolbbq" after the digital monkeys have read Dracula
[This article was first published on Computational Biology Blog in fasta format, and kindly contributed to R-bloggers.]
On the weekend, randomly after watching Catching Fire, I remembered the problem of the typing monkeys (the infinite monkey theorem), which basically can be defined as follows (thanks to Wikipedia):
# *******************
# INTRODUCTION
# *******************
The infinite monkey theorem states that a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will almost surely type a given text, such as the complete works of William Shakespeare.
There is a straightforward proof of this theorem. As an introduction, recall that if two events are statistically independent, then the probability of both happening equals the product of the probabilities of each one happening independently. For example, if the chance of rain in Moscow on a particular day in the future is 0.4 and the chance of an earthquake in San Francisco on that same day is 0.00003, then the chance of both happening on that day is 0.4 * 0.00003 = 0.000012, assuming that they are indeed independent.
Suppose the typewriter has 50 keys, and the word to be typed is banana. If the keys are pressed randomly and independently, each key has an equal chance of being pressed. Then, the chance that the first letter typed is ‘b’ is 1/50, the chance that the second letter typed is ‘a’ is also 1/50, and so on. Therefore, the chance of the first six letters spelling banana is (1/50)^6 = 1/15,625,000,000, less than one in 15 billion, but not zero, hence a possible outcome.
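That figure is easy to verify with a one-liner in R (my own quick check, not part of the quoted proof):

```r
# 50 keys, 6 independent keystrokes: chance of spelling "banana"
(1/50)^6   # 6.4e-11, i.e. one in 15,625,000,000
```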
# *******************
# METHODS
# *******************
In my implementation, I will only consider the 26 characters of the alphabet (from a to z, excluding whitespace). The real question I would like to ask is the following:
Given a target word, say “banana”, how many monkeys would be needed to have at least one successful event (a monkey typed the target) after each monkey has typed 6 characters?
To solve this, first calculate the probability of typing the word banana under the uniform model: p = (1/26)^6 ≈ 3.24 × 10^-9.
Now, just compute the number of monkeys that might be needed: on average, n = 1/p = 26^6 = 308,915,776 monkeys.
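A minimal sketch of both computations in R (the variable names are mine; the post's full code is linked under Source and files):

```r
# Uniform model: 26 equally likely characters, target word of length 6
p <- (1/26)^6   # probability that one monkey types "banana": ~3.24e-09
n <- 1/p        # monkeys needed for one success on average: 308,915,776
c(p = p, n = n)
```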
The model that assigns the same probability to each character is labeled the “uniform model” in my simulation.
My goal is to optimize n (minimize the number of monkeys needed, because I am on a tight budget), so I decided to use a Markov Chain model of order 1. If you are unfamiliar with Markov Chains, there is a very nice explanation of the models here.
The training set for the emission probability matrix consists of a parsed version of Dracula (chapters 1 to 3, no punctuation signs, lowercase characters only).
The emission probability matrix of the Markov Chain ensures that the transition from one character to the next is constrained by the previous character, and this relation is weighted by the frequencies obtained from the training text.
It is like having a keyboard with a light on each key: after “a” is pressed, the light intensity of each key would be proportional to how likely that character is to appear after an “a”. For example, “b” would have more light than “a”, because it is more common to find words containing *a-b* than *a-a*.
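A minimal sketch of this training step in R, assuming the parsed text lives in a file called dracula.txt (the file name and parsing details are placeholders, not the post's exact code):

```r
# Read the training text (Dracula, chapters 1 to 3, parsed beforehand),
# lowercase it and keep only the characters a-z
txt   <- paste(readLines("dracula.txt"), collapse = "")
chars <- strsplit(tolower(txt), "")[[1]]
chars <- chars[chars %in% letters]

# Count ordered character pairs and normalize each row into probabilities
trans <- table(head(chars, -1), tail(chars, -1))
trans <- trans / rowSums(trans)

trans["a", "b"]   # estimated P(next = "b" | previous = "a")
```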
# *******************
# RESULTS
# *******************
1) Plot the distribution of characters in the uniform model
Distribution of characters after 10,000 iterations using the uniform model
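The uniform draws behind this plot can be reproduced in a few lines of R (a sketch, not the post's exact code; the seed is my choice):

```r
# 10,000 keystrokes, each of the 26 letters equally likely
set.seed(1)
draws <- sample(letters, 10000, replace = TRUE)
barplot(table(draws))
```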
2) Plot the emission matrices
A) As expected, the transition from one character to another is constrained by the previous character, and this relation is weighted by the frequencies obtained from the training text. B) In the uniform model, each character has the same probability of being typed and does not depend on the previous character.
3) Compare the performance of the two models
In this plot I am comparing the number of monkeys (log10(x)) required to type the target words (indicated in red text) using the Markov Chain model and the uniform model. In general, the Markov Chain model requires fewer monkeys for words that are likely to appear in the training set, like “by”, “the”, “what”, “where” and “Dracula”. On the other hand, for words that only have one character, like “a”, there is no prior information, so the two models perform equally. Another interesting example is the word “linux”, which is not very likely to appear in the training set, and therefore the models perform roughly equally. The extreme case is the word “omglolbbq”, on which the Markov Chain model performs worse than the uniform model: the probability of this character sequence in the training text is very low, so it is penalized and I will need more monkeys to get this target word.
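The comparison can be sketched in R, reusing the trans matrix from the training step above; starting each word with a uniform 1/26 probability for its first character is my assumption, and the post's exact start distribution may differ:

```r
# Probability of a word under the order-1 Markov Chain model
word_prob_markov <- function(word, trans) {
  w <- strsplit(word, "")[[1]]
  p <- 1/26                                   # first character: uniform start
  for (i in seq_along(w)[-1]) p <- p * trans[w[i - 1], w[i]]
  p
}

# Probability of a word under the uniform model
word_prob_uniform <- function(word) (1/26)^nchar(word)

# Expected number of monkeys (1/p): fewer under the Markov Chain model
# for words that are likely in the training text, like "the"
1 / word_prob_markov("the", trans)
1 / word_prob_uniform("the")
```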
# *******************
# SOURCE AND FILES
# *******************
Source and files
Benjamin