After a small frustration with the limits of R's numeric precision, I think I have worked out a solution. The issue was that I was implementing a bone-headed solution to the modulo/exponent step. So, to recap: the Lehmann primality test is a probabilistic prime test. I got the implementation from this article: http://davidkendal.net/articles/2011/12/lehmann-primality-test
The basic logic is, for an odd number n:
1. get a random number a between 1 and n-1
2. calculate x = (a^((n-1)/2)) %% n
3. if x == 1 or x == (n - 1) (i.e. -1 %% n) then the number could be prime
The issue happened when calculating x = (a^((n-1)/2)) %% n. Even for n < 100 the number in this step gets big. So big that the limit of R's floating-point precision rounds the number, and the modulo operation then becomes meaningless. The way to fix this is to use modular exponentiation. (Thanks to Stack Overflow for bringing up the solution.) This lets us repeatedly multiply by the base and take the modulo at every step, so we never have to deal with a number large enough to run into R's precision limits. There are some clever, highly efficient modular exponentiation algorithms, but I am no computer scientist, so we will use the simplest (modular exponentiation with modular multiplication).
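A minimal sketch of that idea in R (the function name mod_exp is my own label, not from the original article):

```r
# Modular exponentiation by repeated modular multiplication:
# reduce the running product mod m after every multiply, so the
# intermediate value never exceeds m^2 and never loses precision.
mod_exp <- function(base, exp, m) {
  result <- 1
  base <- base %% m
  for (i in seq_len(exp)) {
    result <- (result * base) %% m
  }
  result
}
```

Each intermediate product is at most (m-1)^2, which stays well inside the exactly-representable integer range of R's doubles for any m we care about here.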
Here is the full solution:
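A sketch of the whole test in R, putting the three steps and the modular-exponentiation fix together (the function names mod_exp and lehmann_test are my own labels, and k, the number of random trials, is a parameter I've added):

```r
# Modular exponentiation with modular multiplication: take the
# modulo at every step so intermediate values stay small.
mod_exp <- function(base, exp, m) {
  result <- 1
  base <- base %% m
  for (i in seq_len(exp)) {
    result <- (result * base) %% m
  }
  result
}

# Lehmann-style probabilistic prime test: for k random witnesses a,
# check that a^((n-1)/2) mod n is 1 or n-1. Any other value proves
# n composite; passing all k trials means n is probably prime.
lehmann_test <- function(n, k = 20) {
  if (n == 2) return(TRUE)
  if (n < 2 || n %% 2 == 0) return(FALSE)
  for (i in seq_len(k)) {
    a <- sample(n - 1, 1)              # random a in 1..n-1
    x <- mod_exp(a, (n - 1) / 2, n)    # (n-1)/2 is exact since n is odd
    if (x != 1 && x != n - 1) return(FALSE)
  }
  TRUE
}
```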
Modular exponentiation is built into Python's pow() (via its optional third argument), so a Python implementation is quite easy to build as well. Enjoy. R can get frustrating, but a little searching around can get you a long way.
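For comparison, a sketch of the same test in Python, leaning on the three-argument pow() (the function name and the k trial count are my own choices):

```python
import random

def lehmann_test(n, k=20):
    """Probabilistic Lehmann-style prime test with k random trials."""
    if n == 2:
        return True
    if n < 2 or n % 2 == 0:
        return False
    for _ in range(k):
        a = random.randint(1, n - 1)
        # pow(a, e, n) computes a**e % n with modular exponentiation,
        # so no huge intermediate values are ever materialised.
        x = pow(a, (n - 1) // 2, n)
        if x != 1 and x != n - 1:
            return False
    return True
```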