2017-02-26, 20:16  #1 
Nov 2005
1100101_{2} Posts 
A simple idea for factoring numbers
While playing with the Fermat equation I had an idea which might be used for finding integer factors of a product. The idea is to write the Fermat equation in a different way: instead of an equation relating a difference of two squares to the number n, we get an equation consisting of four products, and the size of the numbers involved is reduced. Unfortunately I was not able to come up with an algorithm that works for general numbers with a guaranteed speedup. Maybe you can help develop such an algorithm, or stop me from putting much effort into a dead end.
It is based on a different representation of the Fermat equation q*p = n = x^2 - y^2. We want to solve an equation of the form:

n = p*q = (x+y)*(x-y) = x^2 - y^2 (1)

It can also be written as

x^2 - n = y^2 (2)

This gives a factorization of n. If we already have a relation of the form

x^2 - n = y^2 + e (3)

we try to get a new equation out of this without the error e, such that we get an equation like (2). Out of equation (3) we construct the following equation:

(x + a*e)^2 - n = y'^2 + e' (4)

Here we hope to get e' = 0, so that we have a solution of (2). a is an integer, or at least a*e is an integer. Why should this work? If we take equation (4) with e' = 0,

(x + a*e)^2 - n = y'^2 (5)

and subtract equation (3) from it, we get:

(x + a*e)^2 - n - x^2 + n = y'^2 - y^2 - e
x^2 + 2xae + (ae)^2 - n - x^2 + n = y'^2 - y^2 - e
2xae + (ae)^2 + e = y'^2 - y^2
e*(2ax + a^2*e + 1) = y'^2 - y^2

and, writing y' = y + k:

e*(2ax + a^2*e + 1) = k*(2y + k) (6)

Basically two things have happened here:
1) The probability of fulfilling this new equation is higher than that of fulfilling the original equation.
2) From one equation for one x we can jump to another solution which is far away.

Concerning 1): both equations refer to the same solution, but equation (6) has a better chance to be fulfilled, since:
- If q and p are primes, there is only one solution y'. But there may be many splits of y' into y + k such that k and y fulfil some conditions.
- In equation (5) the numbers on the right are very specific, since they are squares. In equation (6) they are products.
- The size of the numbers on both sides is reduced.
- Both sides have (nearly) the same structure.

So we can extend the regular Fermat factorization method by an additional step: when checking whether x^2 - n is a square, we also check whether (x + e)^2 - n is a square, where y = floor(sqrt(x^2 - n)) and e = (x^2 - n) - y^2.
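The derivation (3)-(6) can be sanity-checked numerically. A minimal Java sketch of mine (class and variable names are my own), using the floor convention y = floor(sqrt(x^2 - n)) and the values n = 32003*16001, x = 22632, a = 2 from the examples:

```java
public class CheckIdentity {
    public static void main(String[] args) {
        long n = 32003L * 16001L;
        long x = 22632, a = 2;

        // Relation (3): x^2 - n = y^2 + e with y = floor(sqrt(x^2 - n))
        long y = (long) Math.sqrt((double) (x * x - n));
        long e = (x * x - n) - y * y;

        // Shifted candidate from (4)/(5): x' = x + a*e, y' = sqrt(x'^2 - n)
        long xShift = x + a * e;
        long yPrime = (long) Math.sqrt((double) (xShift * xShift - n));
        long k = yPrime - y;

        // Identity (6): e*(2ax + a^2*e + 1) = k*(2y + k)
        long lhs = e * (2 * a * x + a * a * e + 1);
        long rhs = k * (2 * y + k);
        System.out.println(lhs + " = " + rhs);  // both sides agree since e' = 0 here
    }
}
```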
Examples:

n = 32003*16001
x = 22632 = ceil(sqrt(n)) + 2 // only two steps into the regular Fermat method
x^2 - n = 356^2 + 685
y = floor(sqrt(x^2 - n)) = 356
e = 685 = 137*5
We choose a = 2:
e * (2ax + a^2*e + 1) = 137*5 * (2*2*22632 + 4*685 + 1) = 137*5 * 93269 = 137*5 * 11*61*139
Choose k = 5*11*139 = 7645, then k + 2y = 7645 + 2*356 = 8357 = 137*61,
i.e. x + a*e = 22632 + 2*685 = 24002 and y' = y + k = 356 + 7645 = 8001 is a solution.
Instead of stepping through a*e = 2*685 = 1370 values of x with the regular Fermat method, we only need 2 steps to get to the solution.

Example with a = 1:
n = 10037*4339
x = 6627 (27 steps past ceil(sqrt(n)) = 6600)
e = 561 = 3*11*17
2x + e + 1 = (2x+1) + e = 5*11*241 + 3*11*17 = 2^3*11*157
y = 605
k = 2849 - 605 = 2244 = 2*2*3*11*17
2y + k = 2*605 + 2244 = 2*11*157
x' = x + e = 6627 + 561 = 7188 and y' = y + k = 2849 is a solution.

Example with a = 4:
n = 10037*4339
x = 6600
Here y = ceil(sqrt(x^2 - n)) = 98, so x^2 - n = y^2 - e with e = 147 = 3*7*7, and the sign of the last term in (6) flips:
2ax + a^2*e - 1 = 2*4*6600 + 16*147 - 1 = 52800 + 2351 = 131*421
k = 2849 - 98 = 2751 = 3*7*131
2y + k = 2*98 + 2751 = 7*421
x' = x + a*e = 6600 + 4*147 = 7188 and y' = 2849 is again a solution.

A simple (Java) algorithm using the facts from above with a = 1 is this:

Code:
public long findFactor(long n) {
    long x = (long) Math.ceil(Math.sqrt(n));
    while (true) {
        long right = x * x - n;
        long y = (long) Math.ceil(Math.sqrt(right));
        long error = y * y - right;     // the error e (ceil convention)
        long x2 = x + error;            // the shifted candidate x + e
        long y2 = x2 * x2 - n;
        double sqrtY2 = Math.sqrt(y2);
        if (sqrtY2 == (long) sqrtY2) {
            return (long) sqrtY2 + x2;  // the factor y' + x'
        }
        x += 1;
    }
}

The algorithm always finds a solution (for n with odd factors), even when the facts above do not apply: in that case the error eventually becomes 0 and the solution is found by the original Fermat test. Each iteration applies the expensive square root calculation twice, so the algorithm takes about twice the time of the original Fermat method per step. There are some more slight improvements, but I could not find an algorithm which provides a speedup for every number compared to the original Fermat method.
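To compare the cost against the plain method, here is a self-contained harness of mine (not from the post) that counts outer-loop iterations of plain Fermat versus the a = 1 error-shift variant on the first example. Note each shifted iteration performs two square roots, so fewer iterations does not automatically mean less time:

```java
public class StepCount {
    // Plain Fermat: step x upward until x^2 - n is a perfect square.
    static long fermatSteps(long n) {
        long steps = 0;
        for (long x = (long) Math.ceil(Math.sqrt((double) n)); ; x++) {
            steps++;
            long d = x * x - n;
            long y = (long) Math.sqrt((double) d);
            if (y * y == d) return steps;
        }
    }

    // Error-shift variant (a = 1): additionally test the shifted candidate x + e.
    static long shiftedSteps(long n) {
        long steps = 0;
        for (long x = (long) Math.ceil(Math.sqrt((double) n)); ; x++) {
            steps++;
            long right = x * x - n;
            long y = (long) Math.ceil(Math.sqrt((double) right));
            long x2 = x + (y * y - right);      // shift by the error e
            long y2 = x2 * x2 - n;
            long s = (long) Math.sqrt((double) y2);
            if (s * s == y2) return steps;
        }
    }

    public static void main(String[] args) {
        long n = 32003L * 16001L;
        System.out.println(fermatSteps(n) + " vs " + shiftedSteps(n));
    }
}
```

For this n the plain loop needs 1373 iterations (x climbs from 22630 to 24002); the shifted variant never needs more, since the error is 0 exactly when x^2 - n is already a square.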
2017-02-26, 21:48  #2 
Bamboozled!
"𒉺𒌌𒇷𒆷𒀭"
May 2003
Down not across
10100111100100_{2} Posts 
Given that the Fermat method is already appallingly slow ...

2017-02-27, 14:39  #3 
Feb 2017
Nowhere
4,591 Posts 
This reminds me of something I read lo, these many years ago in
Recreations in the Theory of Numbers - The Queen of Mathematics Entertains by Albert H. Beiler, which (three cheers for the Information Age!) may be found here. In the chapter "Resolution" on factoring we find the following example:

Quote:
Since any odd integer in excess of unity can be represented as the difference of two squares, we must ultimately arrive at a square if we add squares to an unknown odd number. But if the unknown should happen to be a prime, then it may take many steps before the sum is a square, so that this method had best be abandoned if results are not obtained within a reasonable number of trials. Thus, even for so small a number as 9839, one would have to exhaust almost half of Barlow's table of squares before finding that 9839 + 4919^{2} = 4920^{2}, or 9839 = (4920+4919)(4920-4919) = 9839·1, and that consequently 9839 is a prime.

OK, this is slightly cruder than Fermat's method (which is described just before), but only slightly. These simple methods become prohibitively laborious even for small numbers. 
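Beiler's worst case is easy to reproduce. A small sketch of mine counting how many squares must be added to N = 9839 before the sum is itself a square:

```java
public class SlowSquares {
    public static void main(String[] args) {
        // 9839 is prime, so its only nontrivial representation as a difference
        // of squares is ((N+1)/2)^2 - ((N-1)/2)^2.
        long n = 9839;
        long y = 1;
        while (true) {
            long sum = n + y * y;
            long s = (long) Math.sqrt((double) sum);
            if (s * s == sum) break;  // found n + y^2 = s^2
            y++;
        }
        System.out.println(y);  // 4919, matching 9839 + 4919^2 = 4920^2
    }
}
```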
2017-02-27, 18:04  #4 
"Dana Jacobsen"
Feb 2011
Bangkok, TH
2^{2}·227 Posts 
While it isn't so useful for very large inputs, there is still a use in implementations for fast methods on small inputs. E.g. yafu uses a heavily tuned Fermat-Lehman for numbers under ~42 bits and SQUFOF for larger 64-bit inputs (this used to be true, I haven't checked in a few years). I use Hart's OLF to factor composites of 22-28 bits. Micro-optimizations make a big difference at this level.
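For reference, the core of Hart's one-line factorization (OLF) is tiny. This is a bare sketch following the paper's description (no premultiplier tuning, only valid while n*i fits in a long), not a tuned implementation:

```java
import java.math.BigInteger;

public class HartOLF {
    // For i = 1, 2, ...: let s = ceil(sqrt(n*i)). If s^2 mod n is a perfect
    // square t^2, then gcd(s - t, n) is usually a nontrivial factor of n.
    static long factor(long n) {
        for (long i = 1; ; i++) {
            long s = (long) Math.ceil(Math.sqrt((double) (n * i)));
            long m = (s * s) % n;
            long t = (long) Math.sqrt((double) m);
            if (t * t == m) {
                long g = BigInteger.valueOf(s - t).gcd(BigInteger.valueOf(n)).longValue();
                if (g != 1 && g != n) return g;
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(factor(7L * 13L));  // 7 (found already at i = 1)
        System.out.println(factor(83L * 97L)); // 83
    }
}
```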
Like your method, Hart's OLF uses an integer square root and a perfect square test on each loop iteration. Note that the latter is much faster than a square root in most cases, and the performance of your perfect square detection method will make a noticeable difference for algorithms like SQUFOF.

Fermat-Lehman: http://www.ams.org/journals/mcom/197...632/home.html
Hart's OLF: http://wstein.org/home/wstein/www/ho...linefactor.pdf 
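A typical fast perfect-square test of the kind being referred to filters through the quadratic residues mod 64 first (squares land in only 12 of the 64 classes), and takes a square root only for survivors. This is a generic sketch, not code from any of the linked papers:

```java
public class SquareTest {
    // Precompute which residues mod 64 a square can have (12 of 64).
    private static final boolean[] OK_MOD64 = new boolean[64];
    static {
        for (int i = 0; i < 64; i++) OK_MOD64[(i * i) & 63] = true;
    }

    static boolean isSquare(long n) {
        if (n < 0) return false;
        if (!OK_MOD64[(int) (n & 63)]) return false;  // cheap rejection, ~81% of inputs
        long r = (long) Math.sqrt((double) n);
        // check neighbors too, guarding against double rounding for large n
        return r * r == n || (r + 1) * (r + 1) == n || (r - 1) * (r - 1) == n;
    }

    public static void main(String[] args) {
        System.out.println(isSquare(64016001L)); // true: 8001^2
        System.out.println(isSquare(64016002L)); // false
    }
}
```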
2017-02-27, 20:36  #5 
"mahfoud belhadj"
Feb 2017
Kitchener, Ontario
60_{10} Posts 
The problem with the Fermat factoring method is that it is just one of what I call sequential methods. You are looking for a square to add to your number to get another square, and because we have no way of knowing where to start, we end up having to try one square after another. So these methods can't be made efficient; under certain circumstances they can be worse than simple division. Try using Fermat's method on, say, 13*643 and see how fast it converges to the solution. Then try division by primes below sqrt(13*643).
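The suggested comparison for 13*643 = 8359 is easy to run; a quick sketch of mine (the step counts come from the code, not from the post):

```java
public class FermatVsTrial {
    // Plain Fermat: step x upward from ceil(sqrt(n)) until x^2 - n is a square.
    static long fermatSteps(long n) {
        long steps = 0;
        for (long x = (long) Math.ceil(Math.sqrt((double) n)); ; x++) {
            steps++;
            long d = x * x - n;
            long y = (long) Math.sqrt((double) d);
            if (y * y == d) return steps;
        }
    }

    // Trial division: count candidate divisors tried up to sqrt(n).
    static long trialSteps(long n) {
        long steps = 0;
        for (long p = 2; p * p <= n; p++) {
            steps++;
            if (n % p == 0) return steps;
        }
        return steps;  // n is prime
    }

    public static void main(String[] args) {
        long n = 13 * 643;
        System.out.println("Fermat: " + fermatSteps(n)
                + " steps, trial division: " + trialSteps(n) + " steps");
    }
}
```

For 8359 the Fermat loop runs 237 iterations (x climbs from 92 up to (13+643)/2 = 328), while trial division hits the factor 13 after 12 candidates.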

2017-02-27, 22:49  #6  
"Dana Jacobsen"
Feb 2011
Bangkok, TH
2^{2}×227 Posts 
Quote:
How about: Code:
185486767418172501041516225455805768237366368964328490571098416064672288855543059138404131637447372942151236559829709849969346650897776687202384767704706338162219624578777915220190863619885201763980069247978050169295918863

I do agree that once we've gotten past some special inputs and out of the 64-bit range (which for this forum would be considered tiny numbers), Fermat, HOLF, Lehman, SQUFOF, etc. have little use other than optimizing the factoring of tiny inputs (e.g. small cofactors, or supporting other algorithms that need factoring at these tiny sizes). 

2017-02-27, 23:12  #7  
"Forget I exist"
Jul 2009
Dumbassville
10000011000000_{2} Posts 
Quote:
0,1,4,1,0,1,4,1,0: so you know you are looking for an odd square, and the sum will have to be an even square. 
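The quoted sequence is just i^2 mod 8 for i = 0..8 (squares are always 0, 1, or 4 mod 8, and odd squares are always 1 mod 8); a quick check:

```java
public class SquaresMod8 {
    public static void main(String[] args) {
        // i^2 mod 8 for i = 0..8 reproduces the quoted pattern
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i <= 8; i++) {
            if (i > 0) sb.append(',');
            sb.append((i * i) % 8);
        }
        System.out.println(sb);  // 0,1,4,1,0,1,4,1,0
    }
}
```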

2017-02-28, 00:46  #8  
"mahfoud belhadj"
Feb 2017
Kitchener, Ontario
111100_{2} Posts 
Quote:
N + 9k^2 = (10+3m)^2. For example, if N = 7*13 = 91, then 91 + 3^2 = 10^2, so that N = (10-3)*(10+3) = 7*13. The other simple improvement is to consider the sum of digits of N: the square (10+3m)^2 must have the same digit sum mod 9 as N, so there is really no point considering squares with a different digit sum when looking for 9k^2 = (10+3m)^2 - N. For example, if N = 7*19 = 133, the digit sum is 7, so we only need to consider squares (10+3m)^2 with digit sum 7 (mod 9), and the first candidate, 13^2, happens to be the right one, since 13^2 - 133 = 36 = 9*2^2. But even with these few improvements, Fermat's method is still slow because it is sequential: you need to consider every square of the form 9k^2 if you are adding to N to get a square, and every square (10+3m)^2 with the right digit sum if you are subtracting N from it to get a square, and the one you skip may be the one you need. 
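The digit-sum filter amounts to comparing digital roots (digit sums reduced mod 9). A sketch of mine of Fermat's method with that filter, using the N = 133 example (helper names are my own):

```java
public class DigitSumFilter {
    // Digital root of n >= 1: the digit sum reduced mod 9 (with 9 instead of 0).
    static int digitalRoot(long n) { return (int) (1 + (n - 1) % 9); }

    // Fermat's method, skipping candidates s whose square cannot match
    // N's digit sum mod 9 (the "sum of digits" filter described above).
    static long factor(long n) {
        for (long s = (long) Math.ceil(Math.sqrt((double) n)); ; s++) {
            if (digitalRoot(s * s) != digitalRoot(n)) continue;  // wrong digit sum
            long diff = s * s - n;
            long y = (long) Math.sqrt((double) diff);
            if (y * y == diff) return s - y;  // n = (s - y)(s + y)
        }
    }

    public static void main(String[] args) {
        System.out.println(factor(7L * 19L));  // 7
    }
}
```

For N = 133 the first candidate passing the filter is s = 13 (digital root of 169 is 7, like 133), and 13^2 - 133 = 36 = 6^2 immediately gives 7 * 19.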

2017-02-28, 01:24  #9  
Aug 2006
3×1,993 Posts 
Quote:


2017-02-28, 12:33  #10 
Nov 2005
101 Posts 
Yes, my method is not intended to be fast for bigger inputs.
It can also be used with multipliers, like the Lehman method. I have implemented this as well. Unfortunately, as the multipliers get bigger, the numbers exceed 64 bits. Unfortunately the mod arguments cannot be applied to the basic version.

The idea is a little bit related to the sieving in the Quadratic Sieve: if x^2 - n = 0 mod q, then (x + a*q)^2 - n = 0 mod q. Unfortunately the numbers generated by the error propagation are not smoother than random numbers in that area; they just have a higher chance of being a square. I can prove that the chance of my numbers being a square is higher by a factor of log^2(n) than for random numbers of the same size. So the theoretical running time can only be n^(1/2)/log^2(n), with a trial division phase before the error-propagation Fermat method.

The idea can also be rephrased as a machine-learning back-propagation approach: if we have a function f(x) = x^2 - n which has an error relative to the intended goal (here some square y^2), we look at the function where the input is corrected by the error: f(x + error) has a higher chance to hit those kinds of numbers y'^2. I just played with this error shift (in Excel) and saw that for many (small) numbers it really helped. Then I found that it also works well for multiples of this error. Maybe there are many more ways to produce good numbers. It usually is sufficient to look only at even numbers x. Maybe we can apply this recursively/iteratively?

Last fiddled with by ThiloHarich on 2017-02-28 at 13:09 
2017-02-28, 14:48  #11 
Feb 2017
Nowhere
4,591 Posts 
danaj wrote:
Hart's OLF: http://wstein.org/home/wstein/www/ho...linefactor.pdf

I got a 404 error on the link. I believe I found the paper here. It lays out nicely the circumstances under which it and related methods are likely to be effective.

If you want to make sure your RSA cipher isn't vulnerable, it might be prudent to try these methods on your "hard to factor" number, just to make sure they don't give quick results. After all, someone out there might try it (see danaj's example). It's amazing what embarrassingly simple methods might work to crack RSA ciphers, simply because the people producing the keys failed to realize that someone might try them (or failed to heed standards based on the idea that someone might). One of my favorites was mentioned in this very forum, here.

Last fiddled with by Dr Sardonicus on 2017-02-28 at 14:52 