P-1 found a factor in stage #2, B1=620000, B2=11470000.
UID: Jwb52z/Clay, M72661607 has a factor: 721194472254253579910017 (P-1, B1=620000, B2=11470000) The web site that allowed you to submit results and to check the bit size of your factor no longer accepts results submitted below "1000M", so I don't know how to find the bit depth for this factor now. |
[QUOTE=Jwb52z;408239]P-1 found a factor in stage #2, B1=620000, B2=11470000.
UID: Jwb52z/Clay, M72661607 has a factor: 721194472254253579910017 (P-1, B1=620000, B2=11470000) The web site that allowed you to submit results and to check the bit size of your factor no longer accepts results submitted below "1000M", so I don't know how to find the bit depth for this factor now.[/QUOTE] log (721194472254253579910017) / log (2) = 79.254734521384333395701284634627 |
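For anyone who wants to check bit sizes locally rather than rely on a web form, here is the same computation as a small Python sketch (the factor value is simply copied from the result above):

[CODE]import math

f = 721194472254253579910017   # factor of M72661607 reported above
print(math.log2(f))            # ~79.2547, matching log(f)/log(2) quoted above
print(f.bit_length())          # 80: the number of bits needed to write the factor
[/CODE]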
[QUOTE=Jwb52z;408239]The web site that allowed you to submit results and to check the bit size of your factor no longer accepts results submitted below "1000M", so I don't know how to find the bit depth for this factor now.[/QUOTE]
The entry form on the lower right side of [URL="http://www.mersenne.ca/"]Mersenne.ca[/URL] will do it for you. |
P-1 found a factor in stage #2, B1=670000, B2=12730000.
UID: Jwb52z/Clay, M77284997 has a factor: 307550488804619397198217 (P-1, B1=670000, B2=12730000). 78.025 bits. |
P-1 found a factor in stage #1, B1=620000.
UID: Jwb52z/Clay, M72684683 has a factor: 52202616324930097922768863 (P-1, B1=620000). 85.432 bits. |
P-1 found a factor in stage #2, B1=620000, B2=11470000.
UID: Jwb52z/Clay, M72685439 has a factor: 3091868382160271484282143 (P-1, B1=620000, B2=11470000). 81.355 bits. |
[Thu Sep 17 12:41:45 2015]
ECM found a factor in curve #1, stage #2 Sigma=4339232396038175, B1=50000, B2=5000000. UID: nitro/haswell, M11030779 has a factor: 2158232198221851211938767 (ECM curve 1, B1=50000, B2=5000000), AID: DC4E685AD94DB28E1277F967E6BA**** According to mersenne.ca it is 80.8 bits |
Another < 1M down:
M984047 has a factor: 3108608385818956152983 (71.4 bits). Found by ECM, and for good reason: k = 1579501988126053 = 907 × 1741457539279. Would cost a couple thousand GHz-days to get it through P-1. |
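A sketch of that reasoning in Python, assuming sympy is available for the factorization (the factor and exponent are copied from the post): factors of M_p have the form 2kp+1, and here k contains a roughly 1.7e12 prime, far beyond any sensible P-1 stage-2 bound.

[CODE]from sympy import factorint

p = 984047
f = 3108608385818956152983
k = (f - 1) // (2 * p)     # factors of M_p have the form 2*k*p + 1
print(k)                   # 1579501988126053
print(factorint(k))        # {907: 1, 1741457539279: 1}
# P-1 would need B2 >= 1741457539279 (~1.7e12) to catch this, hence ECM.
[/CODE]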
P-1 found a factor in stage #1, B1=625000.
UID: Jwb52z/Clay, M73438811 has a factor: 126111349467688907859623017 (P-1, B1=625000) 86.705 bits. |
M 11148593 has a [URL="http://www.mersenne.ca/exponent/11148593"]77 bit factor[/URL]
|
M 11198497 has a 74.4 bit factor
[Fri Oct 02 22:13:53 2015] ECM found a factor in curve #1, stage #2 Sigma=4359337085573483, B1=50000, B2=5000000. UID: nitro/haswell, M11198497 has a factor: 26388627524888833477391 (ECM curve 1, B1=50000, B2=5000000), AID: 64D2E3B7E51C9F533A7FC96A5DEB**** |
M 85009 has a 109.9 bit factor
ECM found a factor in curve #45, stage #1 Sigma=5722888173047275, B1=1000000, B2=1000000. UID: nitro/haswell, M85009 has a factor: 1216257991344271446457970390032199 (ECM curve 45, B1=1000000, B2=1000000) |
wow, a 110 bit factor? gratz
|
[QUOTE=Gordon;411901]M 85009 has a 109.9 bit factor
[/QUOTE] Good shot! I´m also doing < 100 K exponents, but have been really unlucky as of late. Just out of curiosity, how long does one curve take for assignments like the one above (B1=1e6, exponent ~ 85000)? I´m doing 28K exponents with B1=3e6, which amounts to roughly the same amount of work per curve. It takes ~11min on an ageing i5-750 OCed to 3.2 GHz. |
[QUOTE=lycorn;411928]Good shot! I´m also doing < 100 K exponents, but have been really unlucky as of late.
Just out of curiosity, how long does one curve take for assignments like the one above (B1=1e6, exponent ~ 85000)? I´m doing 28K exponents with B1=3e6, which amounts to roughly the same amount of work per curve. It takes ~11min on an ageing i5-750 OCed to 3.2 GHz.[/QUOTE] Haswell, 3.4GHz, no overclock. Does stage 1 in about 5 minutes. I run a batch of 300 curves (stage 1 only), then copy that output file into a sub-folder and start GMP-ECM running on it for stage 2. Run another batch of 300 stage 1 curves, repeat the process, and again. Even running 3 copies of GMP-ECM at the same time, cpu utilisation stays below 40%. Memory usage is only about 3gb per instance, and with 32gb I have plenty to play with. |
[QUOTE=Gordon;411964]Haswell, 3.4GHz, no overclock. Does stage 1 in about 5 minutes.
I run a batch of 300 curves (stage 1 only) then copy that output file into a sub-folder. Start GMP-ECM running on that for the stage 2 Run another batch of 300 stage 1 curves, repeat process, and again. Even running 3 copies of GMP-ECM at the same time, cpu utilisation stays below 40%. Memory usage is only about 3gb per instance and with 32 I have plenty to play with.[/QUOTE] What B2 are you using? Your factor-announcement post had B2 = B1, which makes no sense. |
[QUOTE=VBCurtis;411968]What B2 are you using? Your factor-announcement post had B2 = B1, which makes no sense.[/QUOTE]
That was correct, it was found by Prime95 in the Advanced/ECM mode. I am using P95 only for stage 1, using gmpecmhook. The resulting residues get passed to gmp-ecm... sometimes you get lucky. Sometimes you don't - I am working through the exponents between 100k & 1 million, taking them all from 62 to 64 bits; so far not a single factor in nearly 4000 tests. |
M558521 down
It is not very easy to find factors below 1M, but it is possible. Recently I got lucky with this one:
M558521: [URL]http://www.mersenne.ca/exponent/558521[/URL]. 112 bits, ECM work. [Sat Oct 3 19:54:32 2015] ECM found a factor in curve #95, stage #2 Sigma=7665492458698752, B1=250000, B2=25000000. UID: BloodIce/Mjolnir4, M558521 has a factor: 6217626694395075626755074825906289 (ECM curve 95, B1=250000, B2=25000000) |
[QUOTE=Gordon;411969]That was correct, it was found by Prime95 in the advanced,ecm mode. I am using P95 only for stage 1 using gmpecmhook. The resulting residues get passed to gmp-ecm...sometimes you get lucky.
Sometimes you don't - I am working through the exponents between 100k & 1 million taking them all from 62 to 64 bits, so far not a single factor in nearly 4000 tests.[/QUOTE] OK, so the factor was found in stage 1. When GMP-ECM does run stage 2, what B2 are you setting for the B1 = 1M runs? |
[QUOTE=VBCurtis;411996]OK, so the factor was found in stage 1. When GMP-ECM does run stage 2, what B2 are you setting for the B1 = 1M runs?[/QUOTE]
If I let gmp-ecm pick, it does about 975M, or I'll set it manually to 1B |
I´ve never used GMP-ECM on exponents that large. What is the max amount of mem it uses during Stage 2, for the bounds you mentioned on your post?
Never mind, I had missed one of your previous posts, where you refer to the amount of memory used... |
M85027 falls...
another sub 100k exponent succumbs to ecm - ran 425 curves in total
Resuming ECM residue saved with Prime95
Input number is 4758184975...7139017727 (25596 digits)
Using B1=1000000, B2=1000000000, polynomial Dickson(6), sigma=0:3673943552503015
Step 1 took 3847484ms
Step 2 took 1217010ms
********** Factor found in step 2: 113574028377227867558212550573836752813871
Found prime factor of 42 digits: 113574028377227867558212550573836752813871
136.3 bits. |
Using the instructions given at [url]http://www.mersenneforum.org/showpost.php?p=207360&postcount=69[/url], I found that the group order of the lucky elliptic curve is:
2[SUP]2[/SUP] * 3[SUP]3[/SUP] * 5 * 19 * 3457 * 17027 * 33769 * 107273 * 311407 * 685859 * 243064937 So you would not have been able to find that 42-digit prime factor with the bound B2 = 100*B1 generally used by Prime95. You can also see that the group order is a multiple of 12, as expected. |
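A small Python check of that argument, using the group-order factorization quoted above and the bounds from the result: stage 1 needs every prime power except the single largest prime to be at most B1, and that largest prime then has to fall below B2 for stage 2 to succeed.

[CODE]group_order = {2: 2, 3: 3, 5: 1, 19: 1, 3457: 1, 17027: 1, 33769: 1,
               107273: 1, 311407: 1, 685859: 1, 243064937: 1}
B1 = 10**6

largest = max(group_order)
# everything except the largest prime is within the B1 = 1M stage-1 bound
print(all(q**e <= B1 for q, e in group_order.items() if q != largest))  # True
print(largest <= 100 * B1)   # False: 243064937 > 1e8, so B2 = 100*B1 would miss it
print(largest <= 10**9)      # True: caught in stage 2 with the B2 = 1e9 actually used
[/CODE]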
[QUOTE=Gordon;412763]another sub 100k exponent succumbs to ecm - ran 425 curves in total
Resuming ECM residue saved with Prime95 Input number is 4758184975...7139017727 (25596 digits) Using B1=1000000, B2=1000000000, polynomial Dickson(6), sigma=0:3673943552503015 [B]Step 1 took 3847484ms[/B] Step 2 took 1217010ms ********** Factor found in step 2: 113574028377227867558212550573836752813871 Found prime factor of 42 digits: 113574028377227867558212550573836752813871 136.3 bits.[/QUOTE] How come Step 1 took the time listed above? If you´ve used Prime95 for Stage 1, GMP-ECM should have taken virtually no time on Stage 1. It seems you are duplicating work running Stage 1 twice, and you can avoid it: Use the following command (adapting it to your particular Setup): ecm -v -resume xxxxxxx.txt 1e6-1e6 1e9. The key point is the 1e6-1e6 syntax, that instructs GMP-ECM not to run Stage 1. And by the way, congrats on that very nice find... you´re making me envious :)) |
3847484 ms is a little more than 1 hour, so almost 19 CPU days were lost in this way while computing the 425 curves.
|
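The arithmetic behind that estimate, with the per-curve stage-1 time taken from the GMP-ECM output above:

[CODE]stage1_seconds = 3847484 / 1000          # redundant GMP-ECM stage 1, per curve
curves = 425
print(stage1_seconds * curves / 86400)   # ~18.9 CPU-days of duplicated work
[/CODE]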
[QUOTE=lycorn;412801]
Use the following command (adapting it to your particular Setup): ecm -v -resume xxxxxxx.txt 1e6-1e6 1e9. [B]The key point is[/B] the 1e6-1e6 syntax, that instructs GMP-ECM not to run Stage 1. [/QUOTE] ..and that's the bit I missed. :redface: I just stopped and restarted one instance of gmp-ecm and restarted with the correct parameters... Step 1 took 0ms Will cut the run time of each curve by 75%, you live and learn :tu: |
[QUOTE=alpertron;412809]3847484 ms is little more than 1 hour, So almost 19 CPU days were lost in this way while computing the 425 curves.[/QUOTE]
It's not as bad as it could have been though, I am running 4 instances of gmp-ecm concurrently...so the waste of wall-time wasn't TOO bad :no: |
[QUOTE=Gordon;411964]Haswell, 3.4GHz, no overclock. Does stage 1 in about 5 minutes.
I run a batch of 300 curves (stage 1 only) then copy that output file into a sub-folder. Start GMP-ECM running on that for the stage 2 Run another batch of 300 stage 1 curves, repeat process, and again. Even running 3 copies of GMP-ECM at the same time, cpu utilisation stays below 40%. Memory usage is only about 3gb per instance and with 32 I have plenty to play with.[/QUOTE] Just to update. Takes 201 seconds to run stage 1 on m85121. I run 4 instances of prime95 at once to generate batches of results. Then 4 instances of gmp-ecm to do stage 2. They only take just under 4gb each. |
So the Stage 1 timing is much better than quoted from your previous post. 201 s is well under 5 minutes.
I must get a Haswell... definitely. Or a Skylake. On another note, I found it funny that the CPU utilization with 3 instances running stayed below 40%. If you have a quad core, each instance of GMP-ECM should take ~ 25% (1 core). |
You might want to experiment with the B2 values a bit, you are now spending 6x longer in stage 2 than in stage 1 (stg1 201 sec vs. stg2 1271 sec). The 'rule of thumb' is spending the same time in stage 2 as in stage 1.
|
[QUOTE=lycorn;412884]So the Stage 1 timing is much better than quoted from your previous post. 201 s is well under 5 minutes.
I must get a Haswell... definitely. Or a Skylake. On another note, I found funny that the CPU utilization with 3 instances running stayed below 40%. If you have a quad core, each instance of GMP-ECM should take ~ 25% (1 core).[/QUOTE] 4 instances is currently holding at ~50% |
[QUOTE=Gordon;412888]4 instances is currently holding at ~50%[/QUOTE]
I haven't tried GMP-ECM on numbers this big, but in my experience ECM does benefit from hyperthreading; that is, you might try running 2-3 stage1's on prime95 with 5-6 stage 2's on ECM. |
[QUOTE=VBCurtis;412889]I haven't tried GMP-ECM on numbers this big, but in my experience ECM does benefit from hyperthreading; that is, you might try running 2-3 stage1's on prime95 with 5-6 stage 2's on ECM.[/QUOTE]
Windows task manager shows the load spread fairly evenly across all "8" cores; I am generating some more stage 1 results and will try running 2 more instances of gmp-ecm. Before the full correct syntax for the parameters was pointed out to me (and so I was redoing stage 1 again), I was also using prime95 to do ecm work via primenet, if you will remember...
M85027 - Step 2 took 1306804ms
just running the 4 x gmp-ecm
M85121 - Step 2 took 829410ms |
Nice!
This is how we all learn :wink: |
[QUOTE=Gordon;412896]Windows task manager shows the load spread fairly evenly across all "8" cores [/QUOTE]
OK, that explains the 50% load with 4 instances. I always forget HT, since I never had such thing on my machines. |
[QUOTE=VBCurtis;412889]I haven't tried GMP-ECM on numbers this big, but in my experience ECM does benefit from hyperthreading; that is, you might try running 2-3 stage1's on prime95 with 5-6 stage 2's on ECM.[/QUOTE]
6 instances of gmp-ecm:
cpu - 78%
ram - 23-26 gig (see attached)
At ~15 minutes per curve, throughput is 24*4*6 = 576 curves/day |
[QUOTE=Gordon;412968]6 instances of gmp-ecm
cpu - 78% At ~15 minutes per curve, throughput is 24*4*6 = 576 curves/day[/QUOTE] If you run 2 prime95's for stage1 at the same time, you'll sustain both stages at 400-450 curves/day. Given your timings, looks like 1 Prime95 can do stage 1 4 times faster than GMP-ECM does stage 2. So, 2 prime95 + 6 GMP-ECM will only slowly build up extra stage1 residues. You could consider decreasing the B2 value that GMP-ECM uses to be roughly 3x the time of stage 1 so that 2 prime95's + 6 GMP-ECM's are in balance, or decrease it further such that stage 2 time is 5/3rds stage 1 time (and run 3x P95 + 5x GMP-ECM). I suggest some experimenting to find which B2 leads to the highest fraction of a t35 per day, rather than the highest number of curves. There's lots of knobs to tinker with! If you try this, be warned that GMP-ECM has only coarse control over B2; it might round your requests to 6G, or 9G, or 12G, etc. |
[QUOTE=VictordeHolland;412885]You might want to experiment with the B2 values a bit, you are now spending 6x longer in stage 2 than in stage 1 (stg1 201 sec vs. stg2 1271 sec). The 'rule of thumb' is spending the same time in stage 2 as in stage 1.[/QUOTE]
I've read on here that time spent in stage 2 should be about 0.7x the time in stage 1 (RDS?), which would imply reducing B2 from its current 1B to about 100M. I don't care about the memory usage - 6 ecm's at 1B is only 24gb of ram. I understand the lower bound means I'm likely to find only smaller factors, but they will churn through much quicker. A search on the internet doesn't seem to turn up much guidance other than the rule of thumb B2=100*B1; anyone got any pointers to some more detailed analysis? |
[QUOTE=Gordon;413286]I've read on here that time spent in stage 2 should be about 0.7x time in stage 1 (RDS?), that would imply reducing B2 from it's current 1B to about 100M.
I don't care about the memory usage - 6 ecm's at 1B is only 24gb of ram, I understand the lower bound means likely to find smaller factors, but they will churn through much quicker. A search on the internet doesn't seem to turn up much guidance other than rule of thumb B2=100*B1, anyone got any pointers to some more detailed analysis?[/QUOTE] The 100*B1 rule of thumb is for the lower-memory, slower, non-GMP-ECM method for stage 2. That's why Prime95 still uses it; P95 uses that low-memory algorithm. The empirical solution is to use the -v flag with GMP-ECM, and try a variety of B2 values to see which value minimizes the expected time to complete the T-level you're interested in (I think t35, B1 = 1M?). The problem is that GMP-ECM has no way to know how long Stage 1 took, so that process doesn't work with your combination of programs. If I were you, I'd run GMP-ECM on a single curve for both stage 1 and stage 2, adjust for the time stage 1 takes on P95 (for M1277, it's about 10% faster at B1 = 11M), and see what B1 minimizes time to complete T35. I happen to enjoy doing such experiments, and I have sufficient memory to do so; if you like I'll run the tests for you and report my results and the process I used (for public-nitpicking purposes, 'cause "trust me" isn't good enough). Let me know what specific candidate you'd like me to test (the results should hold across a pretty wide range of inputs, because the options for B2 are rather coarse). |
[QUOTE=VBCurtis;413305]and I have sufficient memory to do so[/QUOTE]
[offtopic][thinking]I have a very good memory too, but I don't remember where I put it...[/thinking][/offtopic] |
[QUOTE=LaurV;413310][offtopic][thinking]I have a very good memory too, but I don't remember where I put it...[/thinking][/offtopic][/QUOTE]I have lots of very good memories also. I'm now waiting for Alzheimer's to set in so that I can recall them.
|
M40157713 has a factor: 281921279054741252950391
77.89 bits Trial factoring |
[QUOTE=UBR47K;413312]Trial factoring[/QUOTE]
:loco: hahaha, you went to 78 bits for a 40M? This is at least a gorgeous luck, or you invested a lot of resources into it... Anyhow, congrats! |
[QUOTE=VBCurtis;413305]The 100*B1 rule of thumb is for the lower-memory, slower, non-GMP-ECM method for stage 2. That's why Prime95 still uses it; P95 uses that low-memory algorithm.
The empirical solution is to use the -v flag with GMP-ECM, and try a variety of B2 values to see which value minimizes the expected time to complete the T-level you're interested in (I think t35, B1 = 1M?). The problem is that GMP-ECM has no way to know how long Stage 1 took, so that process doesn't work with your combination of programs. If I were you, I'd run GMP-ECM on a single curve for both stage 1 and stage 2, adjust for the time stage 1 takes on P95 (for M1277, it's about 10% faster at B1 = 11M), and see what B1 minimizes time to complete T35. I happen to enjoy doing such experiments, and I have sufficient memory to do so; if you like I'll run the tests for you and report my results and the process I used (for public-nitpicking purposes, 'cause "trust me" isn't good enough). Let me know what specific candidate you'd like me to test (the results should hold across a pretty wide range of inputs, because the options for B2 are rather coarse).[/QUOTE] Let's call Prime95 - P and gmp-ecm - E, T1 = time in stage 1, T2 = time in stage 2. Very rough quick test on M85333:
Stage 1
----------
P - 177 secs
E - 2364 secs (not a typo, ran it again and got 2381)
So in stage 1, with an exponent in the 80k range, P is 13x faster than E!
Stage 2
----------
Only run with E, all cases B1=1M and only running stage 2:
B2=10M - 352mb ram, 43 seconds
B2=50M - 839mb ram, 114 seconds
B2=100M - 1223mb ram, 184 seconds
B2=1000M - 3992mb ram, avg over 50 runs is ~1300 seconds
So if we aim for T2 = ~0.7*T1 then I would choose B1=1M, B2=120M. Will do some more timings running multiple instances of each; the actual figures will vary as it becomes memory constrained - when I ran 6E with 2P, cpu hovers around 80% and the 177 seconds becomes around 250. ps in case anyone is interested, 2^83333-1 has two trivial factors: 49233173057 & 20464531692207892554943 |
[QUOTE=Gordon;413369]Let's call Prime95 - P and gmp-ecm - E, T1=time in stage 1, T2= time in stage 2
Very rough quick test on M85333 Stage 1 ---------- P - 177 secs E - 2364 secs (not a typo, ran it again got 2381) so in stage 1 with an exponent in the 80k range P is 13x faster than E ! Stage 2 ---------- Only run with E, all cases B1=1M and only running stage 2 B2=10M - 352mb ram, 43 seconds B2=50M - 839mb ram, 114 seconds B2=100M - 1223mb ram, 184 seconds B2=1000M - 3992mb ram, avg over 50 runs is ~1300 seconds [/QUOTE] Did you notice that GMP-ECM chooses B2 that isn't quite what you ask it for? E processes Stage 2 in blocks, and a fraction of a block takes the same time as a full block, so the program rounds your requested B2 up to the next-biggest full block. On my setup, invoking E with 1e6 50e6 produces a B2 of 59.4M, and -v flag says 2198 curves are needed to complete a t35. E with 1e6 100e6 produces a B2 of 120M, 1790 curves for a t35. E with 1e6 1000e6 produces B2 of 1306M, 1030 curves for a t35. Using your timings, 177 + 114 = 291 sec/curve for 1e6/59e6. 291*2198 = 640ksec for a t35. 177 + 184 = 361 sec/curve for 1e6/120e6. 361*1790 = 646ksec for a t35. So, it appears between these two settings, 59e6 (your 50e6) is better. I would try a couple more B2 selections, perhaps 65M and 85M, to see if a setting in-between produces a lower expected time for a t35. For mersenne numbers specifically, the block size itself changes in a way I am not familiar with, so I don't know how granular the setting is; there may be a jump from 80M to 120M, for instance. |
[QUOTE=Gordon;413369]Stage 1
---------- P - 177 secs B2=10M - 352mb ram, 43 seconds B2=50M - 839mb ram, 114 seconds B2=100M - 1223mb ram, 184 seconds So if we aim for T2=~0.7*T1 then I would choose B1=1M, B2=120M [/QUOTE] You did the 0.7 in the wrong direction. T2 for 100M is already longer than T1, and your heuristic wants T2 shorter than T1. These B2s are near the normal choice from Prime95; what is stage 2 time for 100M from prime95? Perhaps these numbers are too big to bother with GMP-ECM at all. |
[QUOTE=VBCurtis;413410]Did you notice that GMP-ECM chooses B2 that isn't quite what you ask it for? E processes Stage 2 in blocks, and a fraction of a block takes the same time as a full block, so the program rounds your requested B2 up to the next-biggest full block.
On my setup, invoking E with 1e6 50e6 produces a B2 of 59.4M, and -v flag says 2198 curves are needed to complete a t35. E with 1e6 100e6 produces a B2 of 120M, 1790 curves for a t35. E with 1e6 1000e6 produces B2 of 1306M, 1030 curves for a t35. Using your timings, 177 + 114 = 291 sec/curve for 1e6/59e6. 291*2198 = 640ksec for a t35. 177 + 184 = 361 sec/curve for 1e6/120e6. 361*1790 = 646ksec for a t35. So, it appears between these two settings, 59e6 (your 50e6) is better. I would try a couple more B2 selections, perhaps 65M and 85M, to see if a setting in-between produces a lower expected time for a t35. For mersenne numbers specifically, the block size itself changes in a way I am not familiar with, so I don't know how granular the setting is; there may be a jump from 80M to 120M, for instance.[/QUOTE] Best explanation I have ever seen for how to do the calculations, you're a star :tu: I was looking at the [URL="http://www.mersenne.org/report_ecm/?txt=0&ecm_lo=85300&ecm_hi=85500&ecmnof_lo=1&ecmnof_hi=2500"]tables here[/URL] where it says for a T35 you need 1,580 curves but gives no clue as to bound 2....off to do some more testing |
On my laptop, using P95 for both stages produces a time of 815sec for stage 1 and 298sec for stage 2 with bounds 1e6/100e6. If that ratio of T2 = 35% * T1 holds on your machine also, there is no reason to use GMP-ECM on this exponent; 100e6 on P95 happens faster than 59e6 on GMP-ECM.
Edit: Laptop is a Core M, so should have all the recent instructions, even if it's slow. |
[QUOTE=VBCurtis;413481]On my laptop, Using P95 for both stages produces a time of 815sec for stage 1 and 298sec for stage 2 with bounds 1e6/100e6. If that ratio of T2 = 35% * T1 holds on your machine also, there is no reason to use GMP-ECM on this exponent; 100e6 on P95 happens faster than 59e6 on GMP-ECM.
Edit: Laptop is a Core M, so should have all the recent instructions, even if it's slow.[/QUOTE] On 85333 I did some timings; with B1=1M I get the following:
B2=10M, P=7.865, E=41.98
B2=50M, P=36.552, E=111.99
B2=100M, P=71.512, E=179.93
I calculated the total run times to complete a T35 in increments of 10M through to 100M. For E the sweet spot (minimising curves * (stage 1 + stage 2 time)) is B2=40M, where the per-curve time is 91.09s... still more than twice as slow as P at 50M. So, the question is how many curves does P think it will take to complete a certain T at a certain boundary? If I run E with 1e6-1e6 1e8 it tells me that to complete a T35 will take 1566 curves; at 1e9 it will take 899 curves. Is there any way to get P to spit out the same information? Or if it is the exact same algorithm, can we take the number of curves to be the same? A bit more experimenting, this time on M16553 (chosen at random), using the P-E combo: with B1=1M the lowest time for a T35 is with B2=70M, however with B1=3M the lowest time for a T35 is with B2=500M, and the total time is less than the B1=1M... |
The number of curves for a T35 is a function of the B1 and B2 bounds, not of the software used to run the curve.
Your last couple lines support an observation I've made running my own tests over the years: Running one B1 level higher (Say, 3M instead of 1M) than the usual level for a T-level results in little added time to complete that level, but a substantial chance to find a factor bigger than the targeted T-level. I use 3M for T35, 8M for T40, 20M (or higher) for T45, etc. In your case, B1 = 3M appears to actually *save* time for a T35 compared to 1M, a nice discovery indeed. I believe mersenne.org converts whatever curves you submit into the equivalent number of curves at the indicated level on the site, but I'd like someone else to confirm this. If it's true, I will commence running M1277 with B1 = 18e8/default GMP-ECM B2, with each curve worth 10 curves at B1 = 8e8/B2 = 8e10. |
[QUOTE=VBCurtis;413505]The number of curves for a T35 is a function of the B1 and B2 bounds, not of the software used to run the curve.
Your last couple lines support an observation I've made running my own tests over the years: Running one B1 level higher (Say, 3M instead of 1M) than the usual level for a T-level results in little added time to complete that level, but a substantial chance to find a factor bigger than the targeted T-level. I use 3M for T35, 8M for T40, 20M (or higher) for T45, etc. In your case, B1 = 3M appears to actually *save* time for a T35 compared to 1M, a nice discovery indeed. I believe mersenne.org converts whatever curves you submit into the equivalent number of curves at the indicated level on the site, but I'd like someone else to confirm this. If it's true, I will commence running M1277 with B1 = 18e8/default GMP-ECM B2, with each curve worth 10 curves at B1 = 8e8/B2 = 8e10.[/QUOTE] right, one full T35 completed on M16553:
Stage 1 - 5 instances of P, 533 curves @ 111 seconds, wall time 199 minutes
Stage 2 - 5 instances of E, 533 curves @ 70 seconds, wall time 124 minutes
Total time 5h 23m |
[QUOTE=Gordon;413646]right, one full T35 completed on M16553
Stage 1 - 5 instances of P, 533 curves @ 111 seconds, wall time 199 minutes Stage 2 - 5 instances of E, 533 curves @ 70 seconds, wall time 124 minutes Total time 5h 23m[/QUOTE] Sorry to quote myself, I forgot to put the bounds: B1=3M, B2=500M. And according to the web page that does the ecm reports, the B1=11M curve count increased... more on this in another post :bangheadonwall: |
[QUOTE=Gordon;413654]Sorry to quoye myself. forgot to put the bounds
B1=3M, B2=500M and according to the web page that does the ecm reports, the B1=11M curve count increased...more on this in another post :bangheadonwall:[/QUOTE] 533 curves at 3M have the same chance of finding a 45-digit factor as whatever number of 11M curves the database increased the count by. Your curves were less efficient per unit time at finding 45-digit factors than B1 = 11M (or 20M, etc) curves would be, but they still had a chance, and that chance is reflected by the curve-count increase in the 11M column. You can calculate this yourself with the -v flag in GMP-ECM; see what 3M/500M says for the number of curves to complete a T45, etc. |
Another little precious:
M190979 has a factor: 24771663822972061822220659457 / (ECM curve 65, B1=250000, B2=25000000) 94.323 bits k= 2[SUP]7[/SUP] × 14929 × 28477 × 1191803465693 |
[QUOTE=lycorn;414511]Another little precious:
M190979 has a factor: 24771663822972061822220659457 / (ECM curve 65, B1=250000, B2=25000000) 94.323 bits k= 2[SUP]7[/SUP] × 14929 × 28477 × 1191803465693[/QUOTE] Nice one! I've moved about 6500 exponents below 1M from 62->64 bits, not a single factor so far.... |
0-for-6500 tells you either that factoring level is already complete (via ECM to 20-digit level, perhaps?), or that your setup is faulty.
|
[QUOTE=Gordon;414539]Nice one! I've moved about 6500 exponents below 1M from 62->64 bits, not a single factor so far....[/QUOTE]
Which is expected, due to the qty of P-1 and ECM done on that range, the chance for a 20-25 digits factor is extremely small. |
[QUOTE=LaurV;414571]Which is expected, due to the qty of P-1 and ECM done on that range, the chance for a 20-25 digits factor is extremely small.[/QUOTE]
I don't really expect to find many (any) but it is a "levelling up" exercise to get everything up to a minimum of 64 bits. Sub 100k exponents might have to wait though, mfaktc can't go that low and the cpu is flat out on ecm in that region... |
[QUOTE=VBCurtis;414557]0-for-6500 tells you either that factoring level is already complete (via ECM to 20-digit level, perhaps?), or that your setup is faulty.[/QUOTE]
I suspect the former, the equipment is fine. |
I was on the same project before, bringing everything in that region up to 62 bits, and I didn't find anything either. To some folks I was joking about going to 63 next, but I really didn't want to do that - and now, immediately, somebody else has taken the work for himself :hello:
|
Isn't it vastly more productive to take everything sub-1M to t25 via ECM? That should find ~all factors to 75 bits and quite a few in 75-80 range. ECM is the way forward for finding factors of small exponents, while TF is not only redundant but a dead end.
|
We know that, it's just that we like to see the lowest TF number being 62 and not 61, or 64 and not 62 respectively. Not the best use of the hardware (which is the reason why I stopped) but fun to do anyway.
|
[QUOTE=VBCurtis;414660]Isn't it vastly more productive to take everything sub-1M to t25 via ECM? That should find ~all factors to 75 bits and quite a few in 75-80 range. ECM is the way forward for finding factors of small exponents, while TF is not only redundant but a dead end.[/QUOTE]
Never a dead end until you actually do it and get the answer...it's really just about neatness, so when you look at the [URL="http://www.mersenne.ca/status/tf/0/0/4/0"]progress chart[/URL] eventually everything will be at least 64 bits. As to the ecm, as you know I am working on that as well, taking everything to a T35 equivalent, which for exponents in the sub 20k bracket means 533 curves, B1=3M,B2=500M |
[QUOTE=VBCurtis;414660]Isn't it vastly more productive to take everything sub-1M to t25 via ECM? That should find ~all factors to 75 bits and quite a few in 75-80 range. ECM is the way forward for finding factors of small exponents, while TF is not only redundant but a dead end.[/QUOTE]
Ok, to return to my previous post before going on holiday about the [URL="http://www.mersenne.org/report_ecm/?txt=0&ecm_lo=16649&ecm_hi=16649&ecmnof_lo=1&ecmnof_hi=2500"]detailed ecm progress report[/URL] being mainly a work of fiction...
Go to that report and notice the number of curves required for each level:
T25 - 280
T30 - 640
T35 - 1580
T40 - 4700
T45 - 9700
T50 - 17100
Now scroll down to the entry for M16649 and you will see the following:
T25 - Done
T30 - Done
T35 - Done
T40 - Done
T45 - 1864
T50 -
So that means that this exponent must have had 9064 curves run, right? We can [URL="http://www.mersenne.org/report_exponent/?exp_lo=16649&exp_hi=&full=1&ecmhist=1"]check that here[/URL]. Oh dear, it only has 915 curves in total, as follows:
T25 - none
T30 - none
T35 - 564
T40 - 100
T45 - 151
T50 - 100
So whatever the detailed ecm progress report uses for its data, it sure isn't coming from the results database, and the information presented is to all intents and purposes useless. Why can't the report show the actual number of curves run at each level, as I have summarised above? You could then use this to plan where to put the effort in. |
[QUOTE=Gordon;414715]Never a dead end until you actually do it and get the answer...it's really just about neatness, so when you look at the [URL="http://www.mersenne.ca/status/tf/0/0/4/0"]progress chart[/URL] eventually everything will be at least 64 bits.[/QUOTE]
Sure... Burning a few dinosaurs to make things look "neat" to the very few who look at the report makes a great deal of sense.... |
[QUOTE=chalsall;414719]Sure... Burning a few dinosaurs to make things look "neat" to the very few who look at the report makes a great deal of sense....[/QUOTE]
Yep, my money, my equipment :smile: |
[QUOTE=Gordon;414721]Yep, my money, my equipment :smile:[/QUOTE]
Yup. Rock your boat! :smile: |
[QUOTE=Gordon;414718]
now scroll down to the entry for M16649 and you will see the following T25 -Done T30 - Done T35 - Done T40 - Done T45 - 1864 T50 - So that means that this exponent must have had 9064 curves run right? Why can't the report show the actual number of curves run at each level as I have summarised above, you can then use this to plan where to put the effort in.[/QUOTE] 1. No, 9064 curves do not need to be run for this status to be correct. Those 100 curves at t50 level are equivalent to an entire t35 on their own, as one example. Have you noticed how the moment, say, a t35 is complete that there's hundreds of t40-level curves already listed? That's because the 1580 curves at 1M are *equivalent* to some number of curves at t40. The PM1 work that is done also counts as ECM work, which may explain the "missing" curves when taking into account equivalences. One curve at 44e6/6e9 is worth almost 4 default t45 curves. 2. The report does not show the actual number of curves run at each level because it would confuse the people who do not understand ECM well, while causing unneeded headaches for the people who do understand ECM. Even if the report is slightly optimistic, running curves at higher bounds rarely wastes much effort while running curves at too low a bound wastes quite a bit of time. |
[QUOTE=VBCurtis;414739]1. No, 9064 curves do not need to be run for this status to be correct. Those 100 curves at t50 level are equivalent to an entire t35 on their own, as one example. Have you noticed how the moment, say, a t35 is complete that there's hundreds of t40-level curves already listed? That's because the 1580 curves at 1M are *equivalent* to some number of curves at t40. The PM1 work that is done also counts as ECM work, which may explain the "missing" curves when taking into account equivalences. One curve at 44e6/6e9 is worth almost 4 default t45 curves.
2. The report does not show the actual number of curves run at each level because it would confuse the people who do not understand ECM well, while causing unneeded headaches for the people who do understand ECM. Even if the report is slightly optimistic, running curves at higher bounds rarely wastes much effort while running curves at too low a bound wastes quite a bit of time.[/QUOTE] So are you saying that the 915 curves actually run equate to completing fully T25, T30, T35, T40 and nearly 2000 curves at T50? Seriously? Just to hammer the point home about how what is shown makes no sense at all, let's revisit M1277, I ran another 50,000 curves with B1=50K, checked the result in and the T65 count went up by 3. What are the chances (actual %) of 50K T25 curves actually finding a 65 digit factor? If I was so inclined within a reasonably short period of time (few months) I could make M1277 indicate that T65 was "complete" and anyone looking at that report to pick exponents to factor would likely skip over it. I see no downside to my suggestion, if you fully understand it you can do the comparison in your head, for those that don't - or don't want to and just do the work - they would quickly be able to see what is needed. |
[QUOTE=Gordon;414804]So are you saying that the 915 curves actually run equate to completing fully T25, T30, T35, T40 and nearly 2000 curves at T50?[/quote]
The 2000 curves are listed against t45 level, not t50. Nonetheless, given the results data, it should be about 151 + 4*100 = 551 (+ a few extra) curves at t45 level. I would think that this is due to missing results data rather than incorrect summation. Primenet sums up the ECM results submitted to show the various counts. I'd trust those statistics more than the results data, especially since there doesn't appear to be any ECM activity on these exponents in the past 5 years (very suspicious). [QUOTE=Gordon;414804]Just to hammer the point home about how what is shown makes no sense at all, let's revisit M1277, I ran another 50,000 curves with B1=50K, checked the result in and the T65 count went up by 3. What are the chances (actual %) of 50K T25 curves actually finding a 65 digit factor? If I was so inclined within a reasonably short period of time (few months) I could make M1277 indicate that T65 was "complete" and anyone looking at that report to pick exponents to factor would likely skip over it.[/QUOTE] Running 3 curves at t65 is roughly the same effort as running 50k curves at t25. To finish up t65 that way would require you to run 6 billion curves at t25 level. Which has a pretty decent chance of actually finding a t65 if one exists. But it will not be nearly as efficient as directly running t65 level curves. I say go for it. |
[QUOTE=Gordon;414804]So are you saying that the 915 curves actually run equate to completing fully T25, T30, T35, T40 and nearly 2000 curves at T50?
Seriously? I see no downside to my suggestion, if you fully understand it you can do the comparison in your head, for those that don't - or don't want to and just do the work - they would quickly be able to see what is needed.[/QUOTE] 1. No, I didn't think we were talking about t50 level. Mersenne.org lists 1864 curves at t45 level at present. If we give about 30% credit for curves at 3M and 4x credit for 44M, I count roughly 750 curves equivalent. The PM1 work may be in the vicinity of 50 curves, or more; I am not sure of the conversion for mersenne special form (I think PM1 is more likely than a regular ECM curve to find a factor, due to form 2kp+1 of factors). That means roughly half the indicated work is accounted for in the detailed listing. Running curves at 3M, as it appears you did, is a waste of time compared to running them at 11M or larger. I've no idea what work is missing from the detailed report, and I do wonder how much work the admins think is missing from the detailed reports. 2. How would a listing of actual curves completed allow someone like you to see what is needed? Let's continue with M16649 as the example. With the curves listed on the detailed report now, ignoring the chart listing 1864 curves at t45 level right now, what is needed? Is a t40 complete? If not, how many more curves are needed? I agree that 50k curves at B1 = 50k is insignificant for a t65 (not nearly 3 curves at 800M). 230 billion (per GMP-ECM, I assume you're using that for M1277) such curves would be a t55, and still useless for t65. This does make me question the site's algorithm when upconverting small curves. |
[QUOTE=VBCurtis;414826]I agree that 50k curves at B1 = 50k is insignificant for a t65 (not nearly 3 curves at 800M). 230 billion (per GMP-ECM, I assume you're using that for M1277) such curves would be a t55, and still useless for t65. This does make me question the site's algorithm when upconverting small curves.[/QUOTE]
If I recall correctly, Mersenne.org uses a very primitive way to calculate the total ECM effort, by converting the ECM curves to GHzdays. So: 50,000 curves with B1=50,000 B2=5M => 0.85 GHzdays 3 curves with B1=800M B2=80G => 0.816 GHzdays |
[QUOTE=VictordeHolland;414829]If I recall correctly, Mersenne.org uses a very primitive way to calculate the total ECM effort, by converting the ECM curves to GHzdays.[/QUOTE]
Yes, it is primitive but has nothing to do with GHz-days. Using a formula from Alex Kruppa your B1=x B2=y curves=z values are converted into an equivalent number of curves where B2=100x (call this z1). Then z1 * x is added to the running total of ECM effort. I'm no expert in the field, but I've been told this is good enough for a rough approximation of effort. The most likely cause for the history report not matching the actual total is someone reporting to me by email a substantial number of GMP-ECM results. I have two ways to add this to the database. 1) Convert it to a prime95 compatible format and use the manual web forms , or 2) use a SQL stored procedure that adds to the ECM effort. Method 1 creates a history entry, method 2 does not. |
[QUOTE=axn;414807]The 2000 curves are listed against t45 level, not 50. Nonetheless, given the results data, it should be about 151+4*100 = 551 (+ few extra) curves at t45 level. I would think that this is due to missing results data rather than incorrect summation. Primenet sums up the ECM results submitted to show the various counts. I'd trust that statistics more than the results data, especially since there doesn't appear to be any ECM activity on these exponenets in the past 5 years (very suspicious).
Running 3 curves at t65 is roughly the same effort as running 50k curves at t25. To finish up t65 that way would require you to run 6 billion curves at t25 level. Which has a pretty decent chance of actually finding a t65 if one exists. But it will not be nearly as efficient as directly running t65 level curves. I say go for it.[/QUOTE] Sorry, my mistake, I did of course mean T45 :blush: Still, given the results data visible in the database it just doesn't add up. If this report isn't pulling the data from the database and summing it, where is it getting its information from? Added: typed the above as George was posting his reply... |
[QUOTE=Prime95;414836]Yes, it is primitive but has nothing to do with GHz-days.
Using a formula from Alex Kruppa your B1=x B2=y curves=z values are converted into an equivalent number of curves where B2=100x (call this z1). Then z1 * x is added to the running total of ECM effort. I'm no expert in the field, but I've been told this is good enough for a rough approximation of effort. [/QUOTE] that doesn't sound quite right, let's look at my recent 16553 results:
B1 = 3M = x
B2 = 500M = y (167xB1, not 100xB1)
Curves = 533 = z
The figures for B1, B2 & curves were arrived at by testing as follows:
B2=50M - time a curve under ecm and note how many curves are required
B2=100M - time a curve under ecm and note how many curves are required
and so on. Knowing how long P95 takes to do stage 1 with B1=3M, you can calculate the total time required to complete, say, a T35. Find the combination of B2 & curves that gives the lowest total time. For 16553 it goes like this:
B1=1M, lowest total time is when B2=70M needing 1780 curves, total time 20.47 hours (curves required provided by gmp-ecm)
B1=3M, lowest total time is when B2=500M needing 533 curves, total time 19.24 hours
B1=11M, lowest total time is when B2=1B needing 231 curves, total time 24.87 hours
Per the formula:
1. Throw away Z - the actual number of curves - we don't use it !!
2. Force B2=100*B1 = 300M
3. Calculate Z1*x = 3M*300M = 900*M*M = 9000B
4. ??? |
[QUOTE=Gordon;414839]
Per the formula 1. Throw away Z - the actual number of curves - we don't use it !! 2. Force B2=100*B1 = 300M 3. Calculate Z1*x = 3M*300M = 900*M*M = 9000B 4. ???[/QUOTE] In step 3, where did you get Z1 = 3M and x = 300M? And what does "B" stand for here? 900 million million isn't 9000 anythings that I can think of an abbreviation for. Z1 is the adjusted number of curves, NOT one of the bounds of the curve. If 3M/300M is a standard t40 curve, and you run 3m/500m instead, Alex's formula converts that to an equivalent number of 3m/300m curves. Perhaps one curve at 3m/500m is "worth" 1.3 curves at 3m/300m; if so, submitting Z = 100 curves at 3m/500m will result in Z1 = 130 curves of work at 3m/300m. It sounds from George's reply that his summary tracks a sort of summed-by-B1 effort, which is not very accurate. That would also explain the inflated curve report on the site. However, even inflation by factor-of-2 leads users to run useful curves, where any underestimate would lead to wasted duplication of work; so leaving it as-is is preferred to a report that causes users to do too many curves at too low a level. |
[QUOTE=VBCurtis;414847]In step 3, where did you get Z1 = 3M and x = 300M? And what does "B" stand for here? 900 million million isn't 9000 anythings that I can think of an abbreviation for.
Z1 is the adjusted number of curves, NOT one of the bounds of the curve. If 3M/300M is a standard t40 curve, and you run 3m/500m instead, Alex's formula converts that to an equivalent number of 3m/300m curves. Perhaps one curve at 3m/500m is "worth" 1.3 curves at 3m/300m; if so, submitting Z = 100 curves at 3m/500m will result in Z1 = 130 curves of work at 3m/300m. It sounds from George's reply that his summary tracks a sort of summed-by-B1 effort, which is not very accurate. That would also explain the inflated curve report on the site. However, even inflation by factor-of-2 leads users to run useful curves, where any underestimate would lead to wasted duplication of work; so leaving it as-is is preferred to a report that causes users to do too many curves at too low a level.[/QUOTE] It may be that he's using the old british system of million milliard billion billiard .... etc. There's still a typo though. |
[QUOTE=VBCurtis;414847]In step 3, where did you get Z1 = 3M and x = 300M? And what does "B" stand for here? 900 million million isn't 9000 anythings that I can think of an abbreviation for.
Z1 is the adjusted number of curves, NOT one of the bounds of the curve. If 3M/300M is a standard t40 curve, and you run 3m/500m instead, Alex's formula converts that to an equivalent number of 3m/300m curves. Perhaps one curve at 3m/500m is "worth" 1.3 curves at 3m/300m; if so, submitting Z = 100 curves at 3m/500m will result in Z1 = 130 curves of work at 3m/300m. It sounds from George's reply that his summary tracks a sort of summed-by-B1 effort, which is not very accurate. That would also explain the inflated curve report on the site. However, even inflation by factor-of-2 leads users to run useful curves, where any underestimate would lead to wasted duplication of work; so leaving it as-is is preferred to a report that causes users to do too many curves at too low a level.[/QUOTE] Direct lift from George's post: "Using a formula from Alex Kruppa your B1=x B2=y curves=z values are converted into an equivalent number of curves where B2=100x (call this z1). Then z1 * x is added to the running total of ECM effort." I read this as Z1 = 100x = 100*B1. If it isn't, then it needs to be explained more fully how this mystical "conversion" is done. For my maths, x=B1, so in my case 3M(illion), Z1 = 100*B1 = 300M(illion), and 3 million x 300 million = 900 million million which is 9000 BILLION. The reason for the ???? at point 4 is - what does it actually represent, and how is it converted to curves, equivalent or otherwise? Without knowing how the calcs are done, can you look at the report for say M22787 and be certain that the T25, 30, 35, 40 actually have been FULLY done and that the T45 is 1/6 of the way there?? |
[QUOTE=Gordon;414852]
"Using a formula from Alex Kruppa your B1=x B2=y curves=z values are converted into an equivalent number of curves where B2=100x (call this z1). Then z1 * x is added to the running total of ECM effort."[/QUOTE] Read this as: Using a formula from Alex Kruppa your B1=x B2=y curves=z, values are converted into an "equivalent number of curves (call this z1)" where B2=100x . Then z1 * x is added to the running total of ECM effort. |
[QUOTE=Prime95;414854]Read this as:
Using a formula from Alex Kruppa your B1=x B2=y curves=z, values are converted into an "equivalent number of curves (call this z1)" where B2=100x . Then z1 * x is added to the running total of ECM effort.[/QUOTE] Punctuation always helps :smile: What is the conversion formula? |
[CODE]// Our total_ECM_effort tracks curves assuming a B2 value of 100 * B1.
// If B2 is not 100 * B1, then adjust the reported B1 value up or down
// to reflect the increased or decreased chance of finding a factor.
//
// From Alex Kruppa, master of all things ECM, the following formula
// compensates for using B2 values that are not 100 * B1.
// 0.11 + 0.89 * (log_10(B2 / B1) / 2) ^ 1.5

function normalized_B1( $B1, $B2 ) {
	if ($B2 == 100 * $B1) return $B1;
	return $B1 * (0.11 + 0.89 * pow (log10 ($B2 / $B1) * 0.5, 1.5));
}
[/CODE] |
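For illustration, the same adjustment as a Python sketch, applied to the B1=3M, B2=500M curves discussed earlier in the thread (the PHP above is the authoritative version; this just shows how much extra credit the larger B2 earns):

[CODE]import math

def normalized_B1(B1, B2):
    if B2 == 100 * B1:
        return B1
    return B1 * (0.11 + 0.89 * (math.log10(B2 / B1) * 0.5) ** 1.5)

print(normalized_B1(3e6, 5e8) / 3e6)   # ~1.15: each 3M/500M curve is credited as
                                       # roughly 1.15 curves at the standard 3M/300M
[/CODE]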
Meanwhile...
M191689 has a factor: 1319541091656106614381619344521 / (ECM curve 53, B1=250000, B2=25000000) 100.058 bits k = 2[SUP]2[/SUP] × 5 × 71 × 1439 × 3881 × 434013212304053 |
[QUOTE=Prime95;414836]Using a formula from Alex Kruppa your B1=x B2=y curves=z values are converted into an equivalent number of curves where B2=100x (call this z1). Then z1 * x is added to the running total of ECM effort. I'm no expert in the field, but I've been told this is good enough for a rough approximation of effort.[/QUOTE]
This works well enough when the current "open" range is a t50 and people are reporting curves in the t45-t55 range. But it will give ridiculous results when the open range is a t65 and people are reporting t25 curves. |
[QUOTE=axn;414897]This works well enough when the current "open" range is a t50 and people are reporting curves in the t45-t55 range. But it will give ridiculous results when the open range is a t65 and people are reporting t25 curves.[/QUOTE]
[thinking]Woh! Let's try that! Time to get some free ECM credit, we began to fall behind...:w00t:[/thinking] |
[QUOTE=axn;414897]This works well enough when the current "open" range is a t50 and people are reporting curves in the t45-t55 range. But it will give ridiculous results when the open range is a t65 and people are reporting t25 curves.[/QUOTE]
Well, part of the problem seems to be that all t25 and t30 curves are counted this way for the t50 and up levels, resulting in a permanent overestimation of the actual work done on every candidate marked complete past t45 level. It's not a large error, less than a factor of two, but it's definitely an error. It just so happens to be the sort of error that leads people to run curves slightly too big, which is possibly more efficient (e.g. once half a t35 is done, running curves at 3M is nearly as fast at finishing the t35 whilst finding some factors larger than curves at 1M would). I like accuracy, but this error is wasting very very little user-CPU time. |
P-1 found a factor in stage #2, B1=635000, B2=11906250.
UID: Jwb52z/Clay, M74230187 has a factor: 141895279886608660072079 (P-1, B1=635000, B2=11906250), 76.909 bits. |
Another tiddler falls to ecm - M32029
Really small one this time
ECM found a factor in curve #394, stage #2 Sigma=3746395036922586, B1=3000000, B2=400000000. UID: nitro/haswell, M32029 has a factor: 1113056753632810717273106103120689 (ECM curve 394, B1=3000000, B2=400000000) 109.778 bits... |
Well done, Gordon. I will soon return to those small ranges. And also resume running some GMP-ECM curves on M1277...
A couple of days ago an even smaller one was wiped out by Carsten Kossendey. Somewhere in the 14K range, if memory serves me. |
[QUOTE=lycorn;415749]Well done, Gordon. I will soon return to those small ranges. And also resume running some GMP-ECM curves on M1277...
A couple of days ago an even smaller one was wiped out by Carsten Kossendey. Somewhere in the 14K range, if memory serves me.[/QUOTE] Thanks, testing in this range is quite quick P95 does the stage one in just over 3 minutes (B1=3M) and GMP-ECM does the stage in just over 2 minutes (B2=400M) As for 1277, I've run 120,000 curves on it. The report says that T60 is complete but of course with the earlier discussion on here and looking at the actual results turned in I wonder if it is worth running a few hundred more curves with B1 at 11M & 44M...it won't take long |
[QUOTE=Gordon;415750]
As for 1277, I've run 120,000 curves on it. The report says that T60 is complete but of course with the earlier discussion on here and looking at the actual results turned in I wonder if it is worth running a few hundred more curves with B1 at 11M & 44M...it won't take long[/QUOTE] 120,000 curves sounds like a lot of effort, so I looked at the detailed status page. I was sad to learn they were totally useless, 70k curves at B1 = 50k and 50k curves at B1 = 1M. Congrats, you've done about 2t40, when something on the order of 600t40 are complete. Since you believe the summary report is useless, how about you look at the detailed report for 1277? Add up the number of curves submitted at B1 = 800M or higher. Give triple credit for B1 = 2.5G or higher. If that adds up to 7200 or higher, a t55 is complete just from automated submissions (never mind that most of us who ran curves on m1277 submitted them manually). There's a bunch of curves at 260M too, which are pretty likely to find a factor at 55-digit level.... not to mention the actual 110M t55 curves from the distant past. Suggesting that 11m or 44m curves are of use suggests you don't believe that bigger B1 bounds will find factors of the size you think might be out there. 1500 curves at B1 = 800M is a t50, whether the smaller curves were run or not. By the time 5000 curves are run at 800M, the chances of a missed factor at 50 digits is roughly 1 in e^3, if such a factor exists. Given the amount of manually-submitted curves George has received, and the attention this particular number has had over the years, you really should trust that multiple t55 have been completed. This is not to claim that the summary report is accurate, but even cutting the reported work in half leaves something on the order of 6t55 done. Cut it in half again, since you're paranoid, and 3t55 is a very very conservative estimate for the work completed. That's about a t50 20 times over, leaving a 1 in e^20 chance of a missed factor (if one exists) and your 44M curves utterly useless. So, don't wonder. |
I second VBCurtis. Go straight to B1=800M.
|
[Fri Nov 13 22:49:50 2015]
P-1 found a factor in stage #2, B1=635000, B2=12700000, E=6. UID: firejuggler, M73779743 has a factor: 42881501664078549487591 (P-1, B1=635000, B2=12700000, E=6), very smooth: k = 3 * 5 * 139 * 163 * 223 * 661 * 5801. In fact, I do not understand why it wasn't found in stage #1. |
P-1 found a factor in stage #2, B1=5000000, B2=100000000, E=12.
UID: Sergiosi, M1153223 has a factor: 4573105889235910801109267528278073569467723559031741597532613079 (P-1, B1=5000000, B2=100000000, E=12) composite factor (211.475 bits) with the prime factors 2284688470854204790322774231 (90.884 bits) 2001632147041083037742434439494795009 (120.591 bits) |
Marvelous find, Sergiosi! Even after the split you have a factor above 120 bits. Keep the factors coming.
|
[QUOTE=Sergiosi;416780]P-1 found a factor in stage #2...[/QUOTE]
B-e-a-utiful! Congrats! |
P-1 found a factor in stage #2, B1=645000, B2=12255000.
UID: Jwb52z/Clay, M75143573 has a factor: 2223935088334736540260164531842793066322489490183317406066993 (P-1, B1=645000, B2=12255000) It's a composite factor according to Prime95, but the results file doesn't save the 2 parts that this factor becomes, so I don't know how to post them to this message. |
2223935088334736540260164531842793066322489490183317406066993 = 10616701978609272999314569 (84 bits) * 209475135763965305121935565579560297 (118 bits)
k-values: 2^2*3^2*7*1289*38069*5712719 and 2^2*7*23*113*409*5039*18713*577807*859513 |
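A quick sanity check of that split as a Python sketch (the values are copied from the two posts above, and the expected outputs assume they were transcribed correctly): both prime parts should multiply back to the composite and have the 2kp+1 form required of factors of M75143573.

[CODE]p = 75143573
composite = 2223935088334736540260164531842793066322489490183317406066993
f1 = 10616701978609272999314569
f2 = 209475135763965305121935565579560297

print(f1 * f2 == composite)                    # should be True
print(f1 % (2 * p) == 1, f2 % (2 * p) == 1)    # should be True, True
print((f1 - 1) // (2 * p))                     # the first k-value factored above
print((f2 - 1) // (2 * p))                     # and the second
[/CODE]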