mersenneforum.org  

Old 2021-09-26, 14:39   #56
Zhangrc
 
"University student"
May 2021
Beijing, China

127 Posts

You can try NFS M1217/141065889786325378120502378437521368981420168194011324498831469066398800377270303571726687433864860259570077205499951967 which is much smaller than M1061.

Old 2021-09-26, 14:56   #57
charybdis
 
 
Apr 2020

547 Posts

The difficulty of SNFS is determined by the size of the original number, not the composite cofactor, since it's the original number that determines the polynomial. M1217 has 367 digits, so it has SNFS difficulty 367, not 248 (the size of the cofactor). This would be a record size for an SNFS job: I believe the current record is M1193, which had difficulty 360.

(edit: M1217 might be faster by GNFS on the 248-digit cofactor; the record is 250 digits, so this is technically within reach if you have thousands of cores, but still far more difficult than M1061, which is comparable to a 218- or 219-digit GNFS.)
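
For anyone checking the digit counts above, the standard formula for the decimal length of 2^p - 1 is floor(p*log10(2)) + 1. A quick sketch (my addition, not part of the original post):

Code:
import math

# Decimal digits of the Mersenne number 2^p - 1; floor(p*log10(2)) + 1
# is exact because 2^p is never a power of 10.
def mersenne_digits(p):
    return math.floor(p * math.log10(2)) + 1

print(mersenne_digits(1217))  # 367 -> SNFS difficulty ~367
print(mersenne_digits(1193))  # 360, the record job mentioned above
print(mersenne_digits(1061))  # 320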

Old 2021-09-26, 16:00   #58
xilman
Bamboozled!
 
"𒉺𒌌𒇷𒆷𒀭"
May 2003
Down not across

11×17×59 Posts

Quote:
Originally Posted by Zhangrc View Post
You can try NFS M1217/141065889786325378120502378437521368981420168194011324498831469066398800377270303571726687433864860259570077205499951967 which is much smaller than M1061.
I am trying very, very hard not to go full-on RDS at this point.
Old 2021-09-26, 16:24   #59
PhilF
 
 
"6800 descendent"
Feb 2005
Colorado

2·11·31 Posts

Quote:
Originally Posted by xilman View Post
I am trying very, very hard not to go full-on RDS at this point.
Thank you for your discretion.
Old 2021-11-03, 22:47   #60
SethTro
 
 
"Seth"
Apr 2019

17·23 Posts

Quote:
Originally Posted by VBCurtis View Post
The standard suggestion for ECM depth is 0.21 or 0.20 * digits (the multiplier shrinks a little as the input grows, 0.22 is used for small jobs); that's 77 to 81 in this case. I'd call half a t80 good. From currently reported ECM depth to t80 is something close to a 20% chance of a factor.

As for SNFS, if by "computer" you mean "single motherboard", we agree. If you mean "computation device", a cluster of the sort frmky and the CADO group use is just fine for solving the matrix, and sieving can be spread over thousands of volunteer cores NFS@home-style. The matrix needs a grant or a sympathetic supercomputer-facility-gatekeeper.
I was curious how VBCurtis calculated the odds of a factor between t65 and t80, so I tried with both the Dickman function and Mertens' theorem.

Using Mertens' theorem, the calculation seems to simplify to 1 - 65/80 = 18.75%.
Using the Dickman function (Code) and asking what the odds are that the 2nd-largest factor is between x^0.169 and x^0.20811 (and that factor is not < x^0.169), I also get 18.5%.
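
A minimal sketch of the Mertens-style arithmetic (my reconstruction; the Poisson step is an assumption on my part, not necessarily how the 18.75% was originally derived):

Code:
import math

# Mertens: the expected number of prime factors with lo..hi decimal digits
# is sum(1/p) over that range ~ ln(ln(10^hi)/ln(10^lo)) = ln(hi/lo).
lo, hi = 65, 80
expected = math.log(hi / lo)        # ~0.2076

# Treating the factor count as Poisson, P(at least one factor) is
# 1 - exp(-expected), and exp(-ln(hi/lo)) = lo/hi, i.e. exactly 1 - 65/80.
print(1 - math.exp(-expected))      # 0.1875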

Looking at M1481, where the t65 could conceivably be increased to t70:
Mertens suggests a 7.1% chance of finding a factor. Dickman suggests a higher 10.1% (as you exit the linear phase and start approaching sqrt(N)).

Looking at M1217, where the t65 could also be increased to t70 (assuming Ryan hasn't already):
Mertens suggests a 7.1% chance of finding a factor. Dickman suggests 18.2%!

t70 would be roughly 0.4 GPU-years of work (and $90 of power), or $500/factor.
t75 would be roughly 2.1 GPU-years of work (and $450 of power), or $1400/factor, with a 34.8% chance of finding a factor.
t75 would be a fair bit past optimal (0.28 × digits) but more reasonable for a dedicated individual than factoring a C248.
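
My guess at the arithmetic behind the $/factor figures (assuming dollars per expected factor = power cost / chance of a factor; this reproduces the t70 figure closely, while the quoted $1400 for t75 comes out somewhat higher, so it may fold in costs beyond power):

Code:
# Assumed model (mine): cost per expected factor = power cost / P(factor).
for level, power_usd, p_factor in [("t70", 90, 0.182), ("t75", 450, 0.348)]:
    print(level, round(power_usd / p_factor))   # t70 ~495, t75 ~1293
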
Old 2021-11-03, 23:11   #61
charybdis
 
 
Apr 2020

547 Posts

Quote:
Originally Posted by SethTro View Post
t70 would be roughly 0.4 GPU-years of work (and $90 of power), or $500/factor.
t75 would be roughly 2.1 GPU-years of work (and $450 of power), or $1400/factor, with a 34.8% chance of finding a factor.
t75 would be a fair bit past optimal (0.28 × digits) but more reasonable for a dedicated individual than factoring a C248.
Is this taking stage 2 into account? Unless you've figured out how to run stage 2 on a GPU, there's still a lot of CPU effort involved in running a t70 or t75.

Also 0.28 is very low. Using a more normal figure of 0.31 would suggest running t77 on a c248. With GPU stage 1 the optimal amount of ECM may be even higher. [this number is likely a close call between GNFS and SNFS but that doesn't affect the amount of ECM it needs]
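
For concreteness, the rules of thumb quoted in this thread work out as follows (my arithmetic, not charybdis's):

Code:
# ECM depth targets from the ratios quoted in this thread.
gnfs_ratio, snfs_ratio = 0.31, 0.21
print(round(gnfs_ratio * 248))   # 77: t77 for GNFS on the c248
print(round(snfs_ratio * 367))   # 77: about the same for SNFS difficulty 367
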
Old 2021-11-03, 23:34   #62
SethTro
 
 
"Seth"
Apr 2019

17·23 Posts

Quote:
Originally Posted by charybdis View Post
Is this taking stage 2 into account? Unless you've figured out how to run stage 2 on a GPU, there's still a lot of CPU effort involved in running a t70 or t75.

Also 0.28 is very low. Using a more normal figure of 0.31 would suggest running t77 on a c248. With GPU stage 1 the optimal amount of ECM may be even higher. [this number is likely a close call between GNFS and SNFS but that doesn't affect the amount of ECM it needs]
Thanks for the 0.31 number. VBCurtis's number 0.21 was for SNFS. I searched for the GNFS number but couldn't find it. Can you point me at a link to an analysis of the ECM cutoff?

I didn't take CPU cost into account for $ (it roughly doubles the cost), but I did take it into account for time. I used some numbers from a previous project.

A single test curve at B2=105e12 took 1807 seconds, while the B1 (stage 1) curves are estimated at ~116 seconds/curve.
1807/116 = 15.6; this would be fine if I had ~24 cores (my 12 cores giving ~9x single-core performance).

Instead I'd use B2=105e11, which takes 663 seconds, and compensate by doing ~20% extra curves.
This would give me a more comfortable 6:1 CPU:GPU ratio, which I could satisfy with 8-10 cores.
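
A restatement of that balancing arithmetic (my sketch, not SethTro's actual script):

Code:
# CPU-core-seconds of stage 2 generated per GPU-second of stage 1.
gpu_stage1 = 116                      # estimated GPU seconds/curve
for b2, cpu_stage2 in [("105e12", 1807), ("105e11", 663)]:
    print(b2, round(cpu_stage2 / gpu_stage1, 1))   # 15.6 and 5.7 (~6:1)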

The hard part is that my system is sluggish/non-responsive while I'm running ecm-gpu. I think that if I add 10% wait time that goes away (meaning I can still use my computer as my primary driver).
Old 2021-11-04, 00:29   #63
charybdis
 
 
Apr 2020

547 Posts

Quote:
Originally Posted by SethTro View Post
Thanks for the 0.31 number. VBCurtis's number 0.21 was for SNFS. I searched for the GNFS number but couldn't find it. Can you point me at a link to an analysis of the ECM cutoff?
I've never seen an original source for this value, only mentions of it and similar constants scattered around the forum. One of the old hands would likely have a better idea.

At the t75 level the high RAM usage may be an obstacle to running lots of stage 2 threads on a single machine. You can use the -maxmem flag to limit it, but stage 2 will take longer as a result.
Old 2021-11-04, 02:50   #64
SethTro
 
 
"Seth"
Apr 2019

17·23 Posts

That's a great point about -maxmem.

I updated my optimizer. New hypothetical numbers for a t70 on the C248 remainder of M1217:

147K curves at B1=3e9 B2=6.4e12
Stage 1 (on GPU) takes 120 seconds/curve and Stage 2 (on CPU) takes 285 seconds/curve
Peak memory was reported as 3244MB
The GPU would complete a batch every 2.5 days, which is a little longer than I'd like.
204 GPU-days + 484 CPU-core-days

111K curves at B1=4.3e9 B2=12.9e12
GPU 165 s/curve, CPU 370 s/curve
Peak memory was reported as 3249MB
217 GPU-days + 475 CPU-core-days

260K curves at B1=1.2e9 B2=22.4e12
GPU 48 s/curve, CPU 474 s/curve
Peak memory was reported as 6596MB (so I might run into trouble running more than a few instances)
GPU batches would complete every day, which is nice.
144 GPU-days + 2100 CPU-core-days

The 3rd costs slightly more (valuing 10 CPU-core-days = 1 GPU-day) but finishes faster. Sadly, the long batches from the first option mean I probably can't actually do this (I may try one batch just to validate the numbers).
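
The day totals follow from curves x seconds-per-curve (my restatement; it reproduces the first option closely, while the optimizer's totals for the other two run higher, presumably from overheads this naive arithmetic ignores):

Code:
# Naive totals for the first option: curves * seconds-per-curve / 86400.
curves, gpu_s, cpu_s = 147_000, 120, 285
print(round(curves * gpu_s / 86400))   # ~204 GPU-days
print(round(curves * cpu_s / 86400))   # ~485 CPU-core-days
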
Old 2021-11-04, 04:38   #65
VBCurtis
 
 
"Curtis"
Feb 2005
Riverside, CA

3×19×89 Posts

Quote:
Originally Posted by SethTro View Post
I was curious how VBCurtis calculated the odds of a factor between t65 and t80, so I tried with both the Dickman function and Mertens' theorem.

Using Mertens' theorem, the calculation seems to simplify to 1 - 65/80 = 18.75%.
Using the Dickman function (Code) and asking what the odds are that the 2nd-largest factor is between x^0.169 and x^0.20811 (and that factor is not < x^0.169), I also get 18.5%.
I used the rather more naive "odds of an n-digit factor are around 1/n" and summed 1/66 through 1/80. That's in the vicinity of 20%, rounded down because actual factor-finding rates are less than 1/n. It's not a particularly accurate estimate; I appreciate your work to get 18.5%.
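
A quick check of that sum (my one-liner, not VBCurtis's):

Code:
# Heuristic from above: P(an n-digit factor) ~ 1/n, summed over the digit
# sizes newly covered when extending a t65 to a t80.
print(sum(1 / n for n in range(66, 81)))   # ~0.206, "in the vicinity of 20%"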

The 0.31 and 0.21 numbers for GNFS and SNFS are empirical answers to the query "how much ECM should I do such that the time per expected factor for the last block of ECM equals the time to run NFS?" When I started paying attention around 2010, 1/3 and 2/9 were widely regarded as reasonable values, with an observation that as jobs get bigger the ratios should drop a little. Then NFS kept getting faster, so the correct ratios dropped, particularly for GNFS, as poly select has taken 20-30% off sieve time compared to ten years ago.

GPU-ECM shifts the ratio back upward a bit, since it speeds up stage 1 so much.

The Bayesian ECM tool is a more accurate way to determine "optimal" ECM, but that tool also makes assumptions about how long NFS takes. Lucky for us, it barely matters in efficiency terms whether we do 2/3 of optimal ECM or 1.5x optimal ECM... so lots of us still use the ratios as "good enough".

Old 2021-11-04, 05:29   #66
SethTro
 
 
"Seth"
Apr 2019

18716 Posts

Quote:
Originally Posted by VBCurtis View Post
I used the rather more naive "odds of an n-digit factor are around 1/n" and summed 1/66 through 1/80.
I think I'm double dipping (counting both endpoints); it should probably be 1 - 66/80 = 17.5%, or 1 - 65/79 = 17.72%, or possibly 1 - 65.5/79.5 = 17.61%, but these are all very close.

I believe your sum is the expected number of factors (e.g. you expect to find 0.206 more factors by completing a t80): 0.175 for the first factor + 0.175^2 for finding two factors + ... This makes a little less sense when we know the number is a product of exactly 2, 3, or 4 primes, but meh.
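
A quick check of that reading (my sketch; the independence assumption is mine):

Code:
# If P(at least one factor) = p and each additional factor is (roughly,
# assuming independence) another factor of p, then the expected count is
# E = p + p^2 + ... = p / (1 - p).
p = 0.175
print(p / (1 - p))   # ~0.212, close to the ~0.206 from summing 1/n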
