2017-02-27, 22:49   #6
danaj
 
"Dana Jacobsen"
Feb 2011
Bangkok, TH

11100011002 Posts
Default

Quote:
Originally Posted by mahbel
the problem with the Fermat factoring method is that it is just one of what I call sequential methods. You are looking for a square to add to your number to get another square. And because we have no way of knowing where to start, we end up having to try one square after another. So they can't be made efficient. Under certain circumstances, it can be worse than using simple division. Try using the Fermat method for, say, 13*643 and see how fast it converges to the solution. Then try division by primes below sqrt(13*643).
Try it using the similarly sized 89*97 and you will reach the completely opposite conclusion vs. trial division. There are also variants with better complexity than trial division, e.g. Lehman O(n^1/3) and McKee O(n^1/4).
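
To make the comparison concrete, here is a minimal Python sketch of the plain textbook Fermat method (the function name and the step counts in the comments are just illustrative, not taken from any particular implementation):
Code:
from math import isqrt

def fermat_factor(n):
    """Fermat's method: find a such that a^2 - n is a perfect square b^2;
    then n = (a - b)(a + b).  Returns (factor, iterations)."""
    a = isqrt(n)
    if a * a < n:
        a += 1
    steps = 0
    while True:
        steps += 1
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:
            return a - b, steps
        a += 1

print(fermat_factor(13 * 643))  # (13, 237) -- factors far from sqrt(n), so many steps
print(fermat_factor(89 * 97))   # (89, 1)   -- factors close to sqrt(n), found immediately
Fermat's cost depends on how close the two factors are to sqrt(n), which is exactly why 13*643 behaves so differently from 89*97.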

How about:
Code:
185486767418172501041516225455805768237366368964328490571098416064672288855543059138404131637447372942151236559829709849969346650897776687202384767704706338162219624578777915220190863619885201763980069247978050169295918863
which was not a number I created, but was supplied by someone as what they considered to be a hard-to-factor number. It's the product of two large "random" primes, after all. It turns out to be ridiculously easy for Hart's OLF, which is a Fermat variant. Of course this is not going to be the case for properly formed keys, but there are reasons not to be quite so dismissive of these methods having a place in the toolbox.
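
For anyone who wants to try it, here is a small Python sketch of Hart's one-line factorization as I understand it (the function name and the iteration bound are my own choices for illustration, not from the paper):
Code:
from math import isqrt, gcd

def holf(n, max_iter=10**7):
    """Hart's one-line factorization, a Fermat variant:
    for i = 1, 2, 3, ... take s = ceil(sqrt(n*i)); if s^2 mod n is a
    perfect square t^2, then gcd(s - t, n) is (usually) a nontrivial factor."""
    for i in range(1, max_iter + 1):
        ni = n * i
        s = isqrt(ni)
        if s * s < ni:
            s += 1
        m = (s * s) % n
        t = isqrt(m)
        if t * t == m:
            f = gcd(s - t, n)
            if 1 < f < n:
                return f
    return None  # no factor found within the iteration bound
The method succeeds quickly when the ratio of the two prime factors is close to a small fraction, since then n*i is very near a perfect square for a small multiplier i; that is presumably why the number above falls so fast.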

I do agree that once we've gotten past some special inputs and out of the 64-bit range (which for this forum would be considered tiny numbers), Fermat, HOLF, Lehman, SQUFOF, etc. have little use other than optimizing the factoring of tiny inputs (e.g. small co-factors, or supporting other algorithms that need factoring at these tiny sizes).