
So a post about the Lehmann primality test on Hacker News came across my Twitter this morning. It seemed like a great quick coding exercise during lunch. Little did I know… It is a wonderfully simple primality test, but I think I have hit up against some R quirks. Here is my code for the test (thanks to David Kenal at the aforementioned link for the implementation in Scheme; this is very much his logic):

The basic logic, to test a number n, is:

1. get a random number a between 1 and n-1

2. calculate x = (a^((n-1)/2)) %% n

3. if x == 1 or x == (-1 %% n) then the number could be prime

The more you repeat these three steps, the more certain you are, so you call it recursively. The function takes two arguments: the number in question and the number of iterations (the higher, the more certain). This implementation seemed to work.
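The original primeTest() isn't reproduced here, so the following is a sketch of the three steps above as I read them; the function name, argument names, and base cases are my assumptions:

```r
# Sketch of the test described above. Names and edge-case handling
# are assumptions; the core three steps follow the post's logic.
primeTest <- function(n, iter) {
  if (n == 2) return(TRUE)
  if (n < 2 || n %% 2 == 0) return(FALSE)
  if (iter == 0) return(TRUE)          # survived every round: probably prime
  a <- sample(1:(n - 1), 1)            # step 1: random base a
  x <- (a^((n - 1) / 2)) %% n          # step 2: a^((n-1)/2) mod n
  if (x == 1 || x == (-1 %% n)) {      # step 3: 1 or n-1 means "maybe prime"
    primeTest(n, iter - 1)             # recurse for more certainty
  } else {
    FALSE
  }
}
```

For small n this behaves as expected; as the post goes on to show, the naive a^((n-1)/2) blows up in double precision for larger n.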

However… when I started testing larger primes, things went terribly wrong. You get incorrect answers and an ominous-sounding message:

Warning message:
probable complete loss of accuracy in modulus


I believe the issue comes down to floating point conversions, AKA the first circle of The R Inferno (which is a great read, and for a Dante fan and a nerd, a wonderful collision of worlds).

say, for n <- 97 and some random base y:

> (y^((n-1)/2))
[1] 2.730792e+64
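You can reproduce the warning directly; the base 22 here is just an illustrative choice that produces a value of roughly this magnitude:

```r
# A double this size has no precision left at the scale of 97, so
# %% cannot return a meaningful remainder and R warns about it.
x <- 22^48   # about 2.7e+64, far beyond what a double stores exactly
x %% 97      # triggers "probable complete loss of accuracy in modulus"
```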



That is a big number, and taking the mod of it doesn't go so well. Above a certain point the mod function breaks down because of floating point rounding. I don't know if there is an easy solution, but I wanted to share my experiences, and if you find yourself "Midway on our life's journey, I found myself in dark woods, the right road lost", always look to The R Inferno for help.
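For what it's worth, one standard way around the huge intermediate (my addition, not from the original post) is modular exponentiation by repeated squaring, which reduces mod n at every step:

```r
# Not from the original post: modular exponentiation by repeated
# squaring. Every intermediate stays below n^2, so the arithmetic
# remains exact in doubles for n up to roughly 2^26.
modpow <- function(a, e, n) {
  result <- 1
  base <- a %% n
  while (e > 0) {
    if (e %% 2 == 1) result <- (result * base) %% n
    base <- (base * base) %% n
    e <- e %/% 2
  }
  result
}

modpow(22, 48, 97)   # the same quantity as above, computed exactly
```

Swapping this in for the naive a^((n-1)/2) %% n would let the test run on much larger n without the loss-of-accuracy warning.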

Here is a loop to test:

prime_test <- c(2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)

for (i in seq_along(prime_test)) {
  print(primeTest(prime_test[i], 50))
}