Yes, my method is not intended to be fast for bigger inputs.
It can also be used with multipliers, like the Lehman method.
I have also implemented this.
Unfortunately, as the multipliers get bigger, the numbers exceed 64 bits.
Unfortunately the mod arguments cannot be applied to the basic version.
The idea is a little bit related to the sieving in the Quadratic Sieve.
If x^2 - n = 0 mod q, then (x + aq)^2 - n = 0 mod q.
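A quick numeric check of this congruence (the values n = 2041 and q = 7 are just hypothetical demo inputs, not from the post):

```python
# If x^2 - n = 0 (mod q), then (x + a*q)^2 - n = 0 (mod q),
# since (x + a*q)^2 = x^2 + 2*a*q*x + a^2*q^2 = x^2 (mod q).
n, q = 2041, 7  # demo values; n must be a square mod q for a root x to exist
x = next(x for x in range(q) if (x * x - n) % q == 0)
# every shift by a multiple of q preserves the congruence
assert all(((x + a * q) ** 2 - n) % q == 0 for a in range(1, 50))
print("root mod q:", x)
```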
Unfortunately the numbers generated by the error propagation are not smoother than random numbers in that area. They just have a higher chance to be a square.
I can prove that the chance of my numbers being a square is higher by a factor log^2(n) than for random numbers of the same size. So the theoretical running time can only be n^(1/2) / log^2(n), with a trial division phase before the error-propagation Fermat method.
The idea can also be rephrased by a machine learning back propagation approach:
If we have a function f(x) = x^2 - n which has an error relative to the intended goal (here some square y^2),
we look at the function where the input is corrected by the error:
f(x + error) has a higher chance to hit such numbers y'^2.
I just played with this error shift (in Excel) and saw that for many (small) numbers it really helped. Then I found that it also works well for multiples of this error.
Maybe there are many other ways to produce good numbers. It usually is sufficient to look only at even numbers x.
Maybe we can apply this recursively/iteratively?
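A minimal sketch of one possible reading of the error shift. The definition of the error as the distance from f(x) = x^2 - n down to the nearest square, and all helper names, are my assumptions for illustration, not the poster's actual implementation:

```python
import math

def is_square(m):
    """True if m is a perfect square (m >= 0)."""
    if m < 0:
        return False
    r = math.isqrt(m)
    return r * r == m

def error_shift_candidates(n, x, multiples=5):
    """Candidates x + k*error, where error is the distance from
    f(x) = x^2 - n to the nearest square below it (assumed reading)."""
    d = x * x - n
    y = math.isqrt(d)
    err = d - y * y
    return [x + k * err for k in range(multiples + 1)]

def fermat_with_error_shift(n):
    """Fermat factorization, also testing error-shifted candidates
    and multiples of the error at each step."""
    x = math.isqrt(n)
    if x * x < n:
        x += 1
    while True:
        for c in error_shift_candidates(n, x):
            d = c * c - n
            if is_square(d):
                y = math.isqrt(d)
                return c - y, c + y   # n = (c - y) * (c + y)
        x += 1

print(fermat_with_error_shift(5959))  # (59, 101)
```

The multiples of the error are tried via the `multiples` parameter, matching the observation above that shifting by multiples of the error also helps; a plain Fermat search is recovered as the k = 0 candidate.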
Last fiddled with by ThiloHarich on 20170228 at 13:09
