I am currently doing an internship in England, so I keep alternating between French and English in my emails and other online communication. I have been surprised to see that some websites can recognize whether I am writing in French or in English. For example, Facebook automatically offers to translate my posts. I was really amazed by this ability: how can a computer know what language I am using? Moreover, I use a QWERTY keyboard, which has no accented characters, so I don't write proper French and I never use accents.
This reminded me of some data analysis and computer science courses where the problem was to determine which group an individual belonged to.
Given a text, how can I make my computer decide whether it is an English text or a French text without accents? We deliberately ignore accents, since relying on them would make the task far too easy. In particular, what can I do if I don't have access to French and English dictionaries?
We assume we have a sample of French and English texts which will be used as benchmarks. We call frenchText(i) the i-th French text we have, and englishText(j) the j-th English text we have. The aim is to determine whether the text called TEXT is French or English.
For every text, we count the proportion of every letter. We only take into account the 26 letters of the alphabet in order to keep the program as simple as possible. Then we compute the average over all the French texts and over all the English texts. In this way we obtain the typical proportion of each letter in an "average" French (or English) text.
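As a sketch, counting the letter proportions of a single text could look like this in R (the function and variable names are mine, not those of the original code):

```r
# Proportion of each of the 26 unaccented letters a-z in a text.
# 'text' is assumed to be a single character string.
letterProportions <- function(text) {
  chars <- strsplit(tolower(text), "")[[1]]       # split into single characters
  chars <- chars[chars %in% letters]              # keep only a-z, drop spaces etc.
  counts <- table(factor(chars, levels = letters))# counts for all 26 letters
  as.numeric(counts) / length(chars)              # proportions sum to 1
}

p <- letterProportions("Hello world")
```

The `factor(..., levels = letters)` trick guarantees a vector of length 26 even when some letters never appear, which keeps all texts comparable.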
Then we count the proportion of occurrence of each letter in TEXT and compute the Euclidean distance between TEXT and the average French text (we call this distance d(TEXT, averageF)) and between TEXT and the average English text (d(TEXT, averageE)). If d(TEXT, averageF) is greater than d(TEXT, averageE), we consider TEXT to be closer to the English texts and therefore most likely English. Conversely, if d(TEXT, averageF) < d(TEXT, averageE), we consider the text to be French.
The Euclidean distance between two vectors p and q of dimension n is d(p, q) = sqrt((p1 - q1)^2 + (p2 - q2)^2 + ... + (pn - qn)^2).
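The decision rule above can be sketched as follows (averageF and averageE stand for the average letter-proportion vectors; the names are illustrative):

```r
# Euclidean distance between two letter-proportion vectors p and q
euclid <- function(p, q) sqrt(sum((p - q)^2))

# Classify a text's letter proportions against the two averages:
# the smaller distance wins.
classify <- function(propText, averageF, averageE) {
  if (euclid(propText, averageF) < euclid(propText, averageE)) "French" else "English"
}
```

With toy two-dimensional vectors, `classify(c(0.5, 0.5), c(0.6, 0.4), c(0.1, 0.9))` picks "French", since the first distance is the smaller of the two.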
What is interesting is that, once we have classified a first TEXT, we can, according to the decision we made, include TEXT in the calculation of the average letter proportions of the French texts (respectively the English texts) if it was classified as French (resp. English). We can then use this more accurate average to determine the language of the next text. This is why it is called statistical learning: the more texts we process, the more likely the decision is to be right.
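The running-average update described above can be folded in incrementally, without recomputing over all past texts. A minimal sketch, assuming we keep a count n of texts already averaged (names are mine):

```r
# Fold a newly classified text's letter proportions into an average:
# new mean = (old mean * n + new vector) / (n + 1)
updateAverage <- function(average, n, newProps) {
  (average * n + newProps) / (n + 1)
}
```

For example, updating the average `c(0.5, 0.5)` of one text with a new vector `c(0.7, 0.3)` gives `c(0.6, 0.4)`, the mean of the two.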
It would be interesting to use a weighted average in order to take into account the probability of the event [TEXT is French] (resp. [TEXT is English]).
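One way to make that idea concrete (my own sketch, not part of the original post) is to turn the two distances into a crude pseudo-probability and use it as the weight in the update:

```r
# Crude probability that TEXT is French: the closer TEXT is to the
# French average (small dF), the higher the weight.
probFrench <- function(dF, dE) dE / (dF + dE)

# Weighted update: the text contributes to the French average in
# proportion to the weight w instead of all-or-nothing.
weightedUpdate <- function(average, n, newProps, w) {
  (average * n + w * newProps) / (n + w)
}
```

With w = 1 this reduces to the plain running average; with w = 0 the text is ignored entirely, so ambiguous texts pull the averages less.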
For the initialization, I have used two French texts and two English texts. Each text is small (about 100 words).
Then I tested the program on various French and English texts. The results are consistent with the actual languages of the texts.
If you want to run this program, you will need the .txt files. You can find them on my Google page: https://sites.google.com/site/probaperceptionstock/, in a ZIP file called textPost5. Unzip and save it on your computer in any folder you want.
This method must be used carefully: if the classifications of the first texts are wrong, the subsequent results are likely to be inconsistent. Indeed, a wrong letter-proportion average affects the classification of every later text.
Besides, we use the Euclidean distance in order to keep the presentation clean and simple. However, it is far from being the best distance for such a problem, and other data analysis methods may well be more relevant. For example, as I explained in a previous post (http://probaperception.blogspot.co.uk/2012/09/v-behaviorurldefaultvmlo.html), the Mahalanobis distance could be interesting in this context.
The code (R):
setwd("U:/Blog/Post5") # you will have to change this directory according to your own folder