mersenneforum.org New 70 digit factor

 2010-11-02, 13:18 #1 R.D. Silverman     Nov 2003 2²·5·373 Posts New 70 digit factor Lenstra et al. just announced finding a 70-digit factor of 2^1237 - 1. This lies outside of the current Cunningham table. I wish they would make a pass at the Cunningham 2+ numbers.......
 2010-11-02, 14:54 #2 Mini-Geek Account Deleted     "Tim Sorbera" Aug 2006 San Antonio, TX USA 10AB₁₆ Posts It is listed in the Factor DB. It is 2538207129840687799335203259492870476186248896616401346500027311795983. The cofactor is 303 digits and is composite. Congratulations to all involved for a huge factor! It is the third-largest ECM factor yet.
 2010-11-02, 15:07 #3 Raman Noodles     "Mr. Tuch" Dec 2007 Chennai, India 3·419 Posts That was my favourite number, man! p70 = 2538207129840687799335203259492870476186248896616401346500027311795983, from M1237, by Lenstra et al., using ECM? After M1061, this was the smallest Mersenne number with no known factors at all. The remaining cofactor, a c303, is still composite as yet. Maybe it was based upon my suggestion, at least? M1277 is the next Mersenne number with no known factors at all, and after that only M1619. M1277, I guess, may have a much larger prime factor, as it is close to the prime M1279; similarly, M523, next to the prime M521, splits up into p69.p90. Does anyone know of a place where this new result, along with the sigma value, curve counts, computational effort, etc., has been posted, in a paper at some conference or journal? Where was it announced? How: through private mail, or personally, and to whom? And who was the person who inserted the result into the factor database? Last fiddled with by Raman on 2010-11-02 at 15:34
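The numbers quoted in the posts above are easy to check directly with Python's arbitrary-precision integers; a minimal sketch, using only the values stated above (the base-3 Fermat test is just one quick way to exhibit the cofactor's compositeness):

```python
# Check the reported 70-digit ECM factor of M1237 = 2^1237 - 1,
# using only the values quoted in the posts above.
p70 = 2538207129840687799335203259492870476186248896616401346500027311795983

M1237 = 2**1237 - 1
assert len(str(p70)) == 70      # really a 70-digit number
assert M1237 % p70 == 0         # it divides 2^1237 - 1

cofactor = M1237 // p70
print(len(str(cofactor)))       # 303, matching the "c303" above

# A base-3 Fermat test exhibits the cofactor's compositeness:
# for a prime c, pow(3, c - 1, c) would equal 1.
print(pow(3, cofactor - 1, cofactor) != 1)
```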
 2016-01-18, 20:32 #4 lavalamp     Oct 2007 Manchester, UK 10100111111₂ Posts Forgive me for resurrecting an old thread, but I am curious to know if the factorisation of M1237 / p70 (a c303) with SNFS is now within the realm of possibility for a dedicated amateur (or possibly as a group project)? I have some experience of factoring numbers in the low 200's of digits, but I don't know how the amount of memory required increases for much larger numbers. Alternatively there is the slightly easier number of M1213 / (327511 * p63), which is a c297.
2016-01-18, 23:51   #5
fivemack
(loop (#_fork))

Feb 2006
Cambridge, England

1100011101111₂ Posts

Quote:
 Originally Posted by lavalamp Forgive me for resurrecting an old thread, but I am curious to know if the factorisation of M1237 / p70 (a c303) with SNFS is now within the realm of possibility for a dedicated amateur (or possibly as a group project)? I have some experience of factoring numbers in the low 200's of digits, but I don't know how the amount of memory required increases for much larger numbers. Alternatively there is the slightly easier number of M1213 / (327511 * p63), which is a c297.
Not really practical unless the amateur is spectacularly dedicated, to the point of being willing to spend the price of a large house on the project. As you know, the small factors are immaterial for SNFS. A 1237-bit number would require sievers capable of handling a wider range and larger large primes than the ones we have; I don't know whether the CADO group have improved their siever in that direction, and you have to be a bit careful in the design to make sure that the wider-range siever doesn't use impractical amounts of memory.

Kleinjung / Bos / Lenstra did 2^1199-1, with the final step involving a 270M matrix which took 170 days on a substantial cluster at EPFL (a couple of million dollars' worth of nodes), and which would take decades on the fastest equipment I have access to.

Last fiddled with by fivemack on 2016-01-18 at 23:51

2016-01-19, 00:33   #6
VBCurtis

"Curtis"
Feb 2005
Riverside, CA

2²×3×5×79 Posts

Quote:
 Originally Posted by lavalamp Forgive me for resurrecting an old thread, but I am curious to know if the factorisation of M1237 / p70 (a c303) with SNFS is now within the realm of possibility for a dedicated amateur (or possibly as a group project)? I have some experience of factoring numbers in the low 200's of digits, but I don't know how the amount of memory required increases for much larger numbers. Alternatively there is the slightly easier number of M1213 / (327511 * p63), which is a c297.
Check out the thread on the forum-group-factorization of M991. Figure a doubling of computrons per 30-bit increase in input size.
Matrix-solving memory requirements roughly increase with the square of dimension, while siever memory requirement increases much more slowly (something on the order of a doubling every 150 add'l bits, assuming the CADO siever can do 2^18 by 2^17 sieve region). Something like 4-5GB per thread might be sufficient to sieve M1200-M1300.

You might also want to see what NFS@home (or the M1061 thread here) has for stats on M1061; 5 years later, that size of project might be possible for a forum group. M1200+ is just nuts.
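VBCurtis's rule of thumb ("a doubling of computrons per 30-bit increase in input size") makes for a quick back-of-the-envelope comparison. A sketch, treating the 30-bit doubling rate purely as the post's estimate, with M991 as the baseline:

```python
def relative_effort(bits, reference_bits=991):
    """Sieving effort relative to a reference job (default: M991),
    assuming effort doubles per 30-bit increase in input size."""
    return 2 ** ((bits - reference_bits) / 30)

# How the exponents discussed in this thread compare to M991:
for exponent in (1061, 1213, 1237):
    print(f"M{exponent}: ~{relative_effort(exponent):.0f}x the effort of M991")
```

By this estimate M1237 is on the order of 300 times the work of M991, which matches the "just nuts" verdict above.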

2016-01-19, 02:53   #7
lavalamp

Oct 2007
Manchester, UK

17·79 Posts

Quote:
 Originally Posted by fivemack As you know, the small factors are immaterial for SNFS
Aha, I was not aware of this. I had thought that the known factors could reduce the difficulty of running SNFS, while still allowing one to take advantage of the special form of the number.

Purely as a thought experiment then, which would actually be easier, running SNFS on M1237 or running GNFS on the remaining c303 after dividing out the known p70? Would it be GNFS? I think I vaguely remember reading that SNFS can factor numbers ~50 digits larger than GNFS for roughly the same amount of work/time/other meaningful unit of measure.

(I realise that running either on these numbers is not practical without an NSA sized budget, and perhaps not even then.)

I will read more on the factoring efforts on M991 and M1061.

2016-01-19, 04:01   #8
VBCurtis

"Curtis"
Feb 2005
Riverside, CA

2²·3·5·79 Posts

Quote:
 Originally Posted by VBCurtis Matrix-solving memory requirements roughly increase with the square of dimension, while siever memory requirement increases much more slowly (something on the order of a doubling every 150 add'l bits, assuming the CADO siever can do 2^18 by 2^17 sieve region). Something like 4-5GB per thread might be sufficient to sieve M1200-M1300.
Correction: CADO siever memory requirement increases with sieve area, not one dimension of sieve area. So, using 2^18 by 2^17 would require 16x the memory of 16e, or roughly 16GB per client. Luckily, the CADO client is multi-threaded, so one 4-threaded client could run on that 16GB.
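The 16x figure follows from comparing sieve areas: a 2^18 x 2^17 region against the 16e siever's 2^16 x 2^15. A quick sketch of that arithmetic (the ~1 GB baseline per 16e client is an assumption for illustration, inferred from the 16 GB total above):

```python
# Siever memory scales with the full sieve area (both dimensions),
# per the correction above.
area_cado = 2**18 * 2**17   # proposed CADO sieve region
area_16e  = 2**16 * 2**15   # 16e siever region
ratio = area_cado // area_16e
print(ratio)                # 16

# Illustrative only: assumes ~1 GB per 16e client.
print(f"~{ratio * 1} GB per CADO client")
```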

2016-01-19, 04:13   #9
VBCurtis

"Curtis"
Feb 2005
Riverside, CA

2²·3·5·79 Posts

Quote:
 Originally Posted by lavalamp Purely as a thought experiment then, which would actually be easier, running SNFS on M1237 or running GNFS on the remaining c303 after dividing out the known p70? Would it be GNFS? I think I vaguely remember reading that SNFS can factor numbers ~50 digits larger than GNFS for roughly the same amount of work/time/other meaningful unit of measure.
The rough conversion from SNFS to GNFS difficulty is: 0.56 × SNFS difficulty + 30 ≈ GNFS difficulty. M1237 should be around GNFS239, or just over twice as hard as RSA-768.
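Plugging in the numbers: the SNFS difficulty of M1237 is its decimal length, 1237·log10(2) ≈ 372 digits, and the rule above then gives roughly GNFS239. A quick check of the arithmetic:

```python
from math import log10

# SNFS difficulty of M1237 = decimal length of 2^1237 - 1.
snfs_difficulty = 1237 * log10(2)
# Rule of thumb from the post: GNFS-equivalent ~= 0.56 * SNFS + 30.
gnfs_equivalent = 0.56 * snfs_difficulty + 30

print(round(snfs_difficulty))   # 372
print(round(gnfs_equivalent))   # 239
```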

2016-01-19, 18:27   #10
henryzz
Just call me Henry

"David"
Sep 2007
Cambridge (GMT/BST)

1011011100011₂ Posts

Quote:
 Originally Posted by VBCurtis Correction: CADO siever memory requirement increases with sieve area, not one dimension of sieve area. So, using 2^18 by 2^17 would require 16x the memory of 16e, or roughly 16GB per client. Luckily, the CADO client is multi-threaded, so one 4-threaded client could run on that 16GB.
Has any in-depth comparison of the CADO siever been done? Is it simply N times slower, and is that why we use the ggnfs siever? Could it be sped up by incorporating some of the speedups from the ggnfs siever?
As far as I can see in the code, there doesn't seem to be a limit on the sieve region.
16GB is becoming much more feasible as far as memory usage for all cores is concerned. 32GB is only around £100 now, and 16GB is only £50 if we can squeeze the job in there.

I wonder when it will be time for nfs@home to add the CADO siever for larger jobs.

 2016-01-19, 23:09 #11 VBCurtis     "Curtis" Feb 2005 Riverside, CA 1001010000100₂ Posts Tests on CADO by me (and fivemack, I believe) indicate the siever is 15-30% slower than GGNFS when running the same parameters. So, for big NFS@home projects, one would only need to make up that ~25% efficiency gap via larger large-prime bounds or 17e-equivalent sieve area to make CADO more efficient. I expect GNFS-220 might be big enough for CADO to be faster than ggnfs. I discovered the "params" folder in CADO last night, which has some suggested settings for a range of number sizes. Sometime soon I'll see about running I = 17, or 3 35-bit large primes, or both, as compared to ggnfs. Perusal of the RSA-768 run they did showed they used I = 16, but that was likely a memory restriction, as their sieve nodes had either 1GB or 2GB. They also used 40-bit large primes (!!!). The notes say "parameters were optimized for 37LP, but we accepted up to 40 bit large primes." 64 billion raw relations later.... Last fiddled with by VBCurtis on 2016-01-19 at 23:11

