How are the large factors more than 100 digits found?
Like this [URL="https://www.mersenne.ca/exponent/1171"]128-digit factor of M1171[/URL], or this [URL="https://www.mersenne.ca/exponent/1193"]104-digit factor of M1193[/URL]. If the list [url]https://members.loria.fr/PZimmermann/records/top50.html[/url] is up to date, then the largest factor ever found using ECM has just 83 digits. How are these factors with more than 100 digits found?

These are found by the [URL="https://en.wikipedia.org/wiki/General_number_field_sieve"]NFS method[/URL].
Mersenne numbers are a subset of the [URL="https://homes.cerias.purdue.edu/~ssw/cun/"]Cunningham project[/URL], whose pages list the exact method used to factor each specific number. 
[QUOTE=Batalov;555050]These are found by [URL="https://en.wikipedia.org/wiki/General_number_field_sieve"]NFS method[/URL].
[/QUOTE] I see, thank you. For a given number, at what level shall we stop using ECM and switch to NFS? 
[QUOTE=Ensigm;555053]I see, thank you. For a given number, at what level shall we stop using ECM and switch to NFS?[/QUOTE]
There are different heuristics, and they depend on many factors/considerations. Usually, people prefactor with TF, ECM, etc. up to X% of the SNFS complexity, or of the GNFS size (where [I]X[/I] varies between different opinions; search the forum, as well as SSW's webpage, MathWorld, and similar). The problem in a nutshell:
1) When the factor is already known, you can easily look back retrospectively and conclude: "We would have needed [I]N[/I] times _more_ time to keep ECMing, and we would have found it ...or not" (and we could even arrive at the paradoxical conclusion: "we would have saved so much time by not ECMing at all, skipping it altogether").
2) Usually, we don't know the factor. We ECM up to the point when the time to factor by NFS becomes less than the estimated time to get lucky with ECM (also factoring in the probability that the factor may be out of ECM's reach entirely). 
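The stopping rule described above can be sketched as a simple expected-value comparison. This is just an illustration of the reasoning, not anyone's actual workflow; the function name and the example numbers are made up:

```python
def worth_more_ecm(batch_hours: float, p_success: float, nfs_hours: float) -> bool:
    """Is the next batch of ECM curves still worth running?

    Run it only while its chance of finding a factor exceeds the
    fraction of the expected NFS job time that the batch would consume;
    equivalently, while the expected time-per-factor via ECM
    (batch_hours / p_success) is still below the NFS time.
    """
    return p_success > batch_hours / nfs_hours

# Hypothetical numbers: a 10-hour batch of curves with a 2% chance of a
# hit, versus a 2000-hour NFS job. 0.02 > 10/2000 = 0.005, so keep ECMing.
print(worth_more_ecm(10.0, 0.02, 2000.0))   # True
# A 100-hour batch with the same 2% chance no longer pays off.
print(worth_more_ecm(100.0, 0.02, 2000.0))  # False
```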
[QUOTE=Ensigm;555053]I see, thank you. For a given number, at what level shall we stop using ECM and switch to NFS?[/QUOTE]
There are two flavors of NFS: special (abbreviated SNFS) and general (GNFS). For numbers where SNFS is faster, a rule of thumb is to run ECM up to 0.22 * the difficulty of the number; e.g. an SNFS-300 would get ECM up to 66 digits (about 1.4 * a T65). For numbers where GNFS is faster, or SNFS is not possible, a rule of thumb is ECM to 0.31 * the number of digits; a GNFS-200 would get 62 digits (2 * a T60, or a T60 plus a T60's worth of T65-sized curves). These rules of thumb are not super accurate; to be precise, one gives up on ECM when the marginal next set of ECM curves has a smaller chance of factoring the number than the fraction of the expected NFS-solving time that the set of curves takes. That calculation is tricky, to say the least, which is why the rules of thumb above exist; both of my stated ratios are on the low side, and many people do a bit more ECM than indicated here. 
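The two ratios above turn into a one-line calculation each. A minimal sketch, using the 0.22 and 0.31 ratios from the post (the function names are my own, and real practice rounds these targets to standard T-levels):

```python
def ecm_target_snfs(snfs_difficulty: float) -> int:
    """Rule of thumb: ECM to roughly 0.22 * the SNFS difficulty."""
    return round(0.22 * snfs_difficulty)

def ecm_target_gnfs(decimal_digits: float) -> int:
    """Rule of thumb: ECM to roughly 0.31 * the number's digit count."""
    return round(0.31 * decimal_digits)

# The two worked examples from the post:
print(ecm_target_snfs(300))  # SNFS-300 -> ECM to 66 digits
print(ecm_target_gnfs(200))  # GNFS-200 -> ECM to 62 digits
```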
[QUOTE=Ensigm;555053]I see, thank you. For a given number, at what level shall we stop using ECM and switch to NFS?[/QUOTE]
If your interest is limited to factoring Mersenne numbers, then the practical advice is probably: never. Unless you represent a small government or large research collaboration? Only they have the resources needed to approach the smallest unfactored Mersenne number by NFS. If you are asking in general then Batalov and VBCurtis have good advice. 