Proth primes with k = 1281979
Because I was interested in how sieving worked and also wanted to test Proth20 (Proth testing for GPUs), I started looking at 1281979 * 2^n + 1.
That k is large enough not to get into PG's various Proth subprojects. Also, k itself is prime, which I find interesting since it means there can be a lot of small factors in 1281979 * 2^n + 1, so sieving will be quite efficient. Of course it also means the probability of smaller primes is low, so it evens out. [SIZE=1]And it's my birthday. ;)[/SIZE]

[B]Sieving[/B]
Range: 100 000 < n < 4 100 000
Factors: p < 290e12

I sieved with sr1sieve on a Pentium G870, which took about 6 months. I stopped when finding a factor took about 2 hours; 66272 candidates remained. I know now that I should have sieved to much higher n, but I didn't know that when I began.

[B]Primality testing[/B]
Software is LLR2; n < 1 200 000 was tested on the Pentium G870, and now I have switched to a Ryzen 9 3900X. For n = 1 200 000 a test takes 460 s, for n = 2 300 000 it takes 1710 s, showing nicely how well the approximation "double the digits, quadruple the time" applies, at least as long as FFT and L3 sizes match.

[B]Results[/B]
[CODE]For n < 1 000 000
k =      digits
2        7 (not a Proth)
142      49
202      67
242      79
370      118
578      180
614      191
754      233
6430     1942
7438     2246
10474    3159
11542    3481
45022    13559
46802    14095
70382    21194
74938    22565
367310   110578
485014   146010[/CODE]

Next milestone is n = 1 560 000, at which point any prime would be large enough to enter Caldwell's Top 5000.
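The "double the digits, quadruple the time" rule quoted above can be sanity-checked against the two timings given (a minimal sketch; the quadratic model is only an approximation and holds while the FFT fits in cache):

```python
# Checking the "double the digits, quadruple the time" rule of thumb
# against the timings above: 460 s at n = 1.2M, measured 1710 s at n = 2.3M.
# For fixed k, the digit count is proportional to n, so assume t ~ n^2.

t1, n1 = 460.0, 1_200_000
n2 = 2_300_000

predicted = t1 * (n2 / n1) ** 2
print(f"predicted time at n = {n2}: {predicted:.0f} s")  # ~1690 s, close to the measured 1710 s
```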
Good job. Some random advice: that range is way too large. Split the range into 5-10 smaller ranges for sieving. You will be able to sieve higher and faster for each range; the rate of eliminating candidates per total range will be a bit slower, but you will compensate by having different sieve limits for each range, chosen in such a way as to equalize the LLR duration. Right now, your LLR may take (on a good computer) something like 20 minutes per candidate at the start of the range and 8 hours at the end of the range, so how do you calculate how much to sieve?

Alternatively, you may want to stop the LLR every few days/weeks and do some more sieving for a day or so on the remaining candidates, to be sure that you always choose the task that eliminates numbers faster. Write down how long it takes to eliminate candidates by sieving, and do an LLR test for a candidate which is down the list, at about 5-10%. For example, if the list is 3 thousand candidates, do a test for a candidate in position 200-300 in the list. Write down the time. Eliminate that candidate from the list once the LLR is done. Continue sieving on the list until it takes the same time to find new factors. Then LLR the first 500-600 candidates. Repeat.
There is no such thing as "I should have sieved it higher": stop the LLR, take a text editor, get rid of the tested (LLRed) candidates at the beginning of the list, then sieve the remaining list as high as you want. In fact, it is recommended to do this periodically; as your LLR time per candidate gets higher, at a certain point you will eliminate candidates faster by continuing the sieving process. You don't need to start the sieve from scratch; you can continue any time from where it left off, to higher primes. Make a backup in case you screw up the editing, if you didn't do that before.

The size of the sieving ranges needs to be not too large and not too small. If too large, you will lose a lot of wallclock time either by sieving too high (when you could possibly eliminate smaller candidates faster using LLR, thereby increasing the speed of future sieving sessions) or by sieving too low (when it would have been faster to eliminate larger candidates by more sieving). If too small, you will lose a lot of wallclock time on the overhead of switching from sieving to LLR, sieve initialization, manual work, etc. Choosing the right range size is a matter of experience, system speed, number of cores, etc. That is why CRUS, for example, works in smaller ranges: 100k, 200k, etc.
LaurV, thanks for the advice.
I read here and at PG that the time per factor found increases only with sqrt(n), so I was explicitly told not to split the sieve. At the beginning I did just that, because I thought it was the way to distribute sieving between several cores.

[QUOTE]There is nothing like "i should have sieve it higher"[/QUOTE]By that I meant having candidates up to n = 16 400 000 in the sieve, not a larger p. It would have taken only twice as long, at least to my understanding. Does your method take that into account? Sorry if this is a stupid question, but I'm really not sure. And various people said large sieve files are good (put simply).
Bur, I think you've got the right idea: make one sieve that goes up to an exponent big enough that you're not sure you'll finish. If you're confident you'll get to 6M or 8M, I agree your sieve maybe could have included more n. On the bright side, you'll fully understand how big an exponent to sieve to if you make another sieve!
If the sieve program you're using scales like the srsieve family (it likely does, since you refer to the same sqrt(n-range) scaling), the plan is to sieve until the time per candidate eliminated is about double the time to test the smallest candidate. Then break off a chunk of candidates to LLR, and keep sieving the bigger ones. That gives well-sieved lists for LLR, and speeds the sieve up since the new sieve file has a smaller n-range (missing the smallest ones, I mean). I'm not sure why LaurV suggests splitting the range; that gives up a bunch of efficiency.
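The break-off workflow above can be sketched in a few lines (illustrative only: the 2x threshold, the plain list of n-values, and the chunk size are assumptions for the sketch, not actual srsieve/LLR file formats or interfaces):

```python
# A sketch of the "sieve, then break off a chunk" workflow described above.
# Candidates are represented as a plain list of exponents n.

def split_for_llr(candidates, secs_per_factor, llr_secs_smallest, chunk_n=150_000):
    """Return (llr_batch, keep_sieving); split only once sieving stops paying off."""
    if secs_per_factor < 2.0 * llr_secs_smallest:
        return [], candidates             # sieving still removes candidates fast enough
    cutoff = min(candidates) + chunk_n    # break off e.g. a 100-150k block of n
    batch = [n for n in candidates if n < cutoff]
    rest = [n for n in candidates if n >= cutoff]
    return batch, rest                    # LLR the batch, keep sieving the rest

# Example: sieving now costs 7000 s per factor, smallest test takes 3000 s
print(split_for_llr([100_000, 200_000, 300_000], 7000, 3000))
```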
[QUOTE=bur;571344]
[CODE] For n < 1 000 000 k = (...) [/QUOTE]
1. You meant "n =", not "k ="
2. You missed: 1281979*2^7894+1 is prime (2383 decimal digits), check your results please
3. Data inserted in the [url='https://www.rieselprime.de/ziki/Proth_prime_1281979']Wiki[/url]
[B]VBCurtis[/B], yes, I'm using sr1sieve, which was the fastest among the various srsieves for this task. Good advice to feed candidates from the sieve to LLR testing. I guess I would remove all those candidates that are faster to LLR than to sieve?
And a general question: sometimes the LLR task has to be interrupted (updates, etc.). Is there a better way to continue afterwards than manually checking lresults for progress and removing those numbers from the LLR input file? Can it somehow check automatically which numbers are already tested in lresults?

[B]kar_bon[/B], of course you're right. Fortunately, I only missed it when copying from my Excel sheet, so it's not a genuine miss. Still a bit embarrassing... ;)

Since I can't edit the original post anymore, here's the updated version of the table:

[B]Results[/B]
[CODE]For n < 1 000 000
n =      digits
2        7 (not a Proth)
142      49
202      67
242      79
370      118
578      180
614      191
754      233
6430     1942
7438     2246
7894     2383
10474    3159
11542    3481
45022    13559
46802    14095
70382    21194
74938    22565
367310   110578
485014   146010[/CODE]
[QUOTE=VBCurtis;571397] I'm not sure why LaurV suggests splitting the range that gives up a bunch of efficiency.[/QUOTE]
Nope. I mean, yes, if you think about the same sieving depth. But you will sieve different chunks to different depths, and that is where you GET speed. Say you work a single k and you sieve a very large range, n to N, with some depth p to P (srXsieve notation). You sieve 100k primes per second and eliminate 5 candidates per hour, but this is misleading, because you eliminate large candidates, for which the LLR test would take hours, but you also eliminate small candidates, which would be eliminated faster by an LLR test. That is why you have to test how things are for your system. I will try to give a numerical example soon; let me make one first.
[QUOTE=bur;571499][B]VBCurtis[/B], yes, I'm using sr1sieve which was the fastest among the various srsieves for this task. Good advice to feed candidates from the sieve to LLR testing. I guess I would remove all those candidates that are faster to LLR than to sieve?
And a general question, sometimes the LLR task will have to be interrupted (updates, etc). Is there a better way to continue afterwards than manually checking lresults for progress and removing those numbers from the LLR input file? Can it somehow check automatically which numbers are already tested in lresults? [/QUOTE] You don't pick up very much sieve speed by removing candidates; break off a chunk that tests in a reasonable time, something convenient for your manual labor. That is, you might sieve 100k to 4M, and then break off 100-150k for LLR while 150k-4M keeps sieving. By the time you're up to 500k, you might break off 100k blocks instead; it doesn't matter a whole lot.

LLR maintains an .ini file that includes the line number it has tested. Don't edit the input file when restarting; it will always pick up where it left off.

LaurV seems to be comparing his idea to a plan that never breaks off small pieces when they're "ready" for LLR. I am confident that removing small candidates from the sieve when appropriate is much, much faster than LaurV's splitting by n-range. He and I rarely disagree, and he will surely relish showing me I'm mistaken!
I think the breaking-off-or-not question depends on the timescale. Splitting the range will speed up the sieving at the moment. However, in the long run the overall sieving time would be shorter when sieving longer ranges. At least that's what I think.
Anyway, one core freed up, so I ran sr1sieve for 2.7e6 < n < 4.1e6 and 314e12 < P < 325e12. The estimate was 24 factors in about 3 days, or about 11,000 s per candidate removed. To confirm, I ran it for 7 hours, about 10%, and found 4 factors, i.e. 6300 s per candidate removed. LLR2 takes 2500 s for n = 2.7e6, so I stopped the sieve and continued with LLR. It seems sieving doesn't make sense anymore for this n-range. Of course I could extend the sieve, but for now I want to finish n < 4.1e6 and then see how and whether I'll continue.
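The stop-or-continue decision made here boils down to a one-line comparison, using the numbers quoted above (a toy sketch of the rule of thumb, not any tool's actual logic):

```python
# The rule applied above: 6300 s per sieved-out factor vs. 2500 s per
# LLR test at n = 2.7e6, so LLR eliminates candidates faster now.

def faster_task(sieve_secs_per_factor, llr_secs_smallest):
    """Pick whichever task currently eliminates a candidate in less wall time."""
    return "sieve" if sieve_secs_per_factor < llr_secs_smallest else "llr"

print(faster_task(6300.0, 2500.0))  # -> llr, so the sieve was stopped
```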
All n < 2,400,000 tested
No new primes to report. Is anyone interested in the lresults file with the residues?
[CODE]For n < 2,400,000
n =       digits
2         7 (not a Proth)
142       49
202       67
242       79
370       118
578       180
614       191
754       233
6,430     1,942
7,438     2,246
7,894     2,383
10,474    3,159
11,542    3,481
45,022    13,559
46,802    14,095
70,382    21,194
74,938    22,565
367,310   110,578
485,014   146,010[/CODE]

Due to the lack of new primes, here are some statistics:

Largest n in progress = 3.20e6
# of digits = 963,000 (rank: ~1000)
FFT size = 320k
Avg. time = 3140 s

Smallest n in progress = 2.45e6
# of digits = 737,500 (rank: ~1690)
FFT size = 256k
Avg. time = 1935 s

In about 15 days Mega Prime territory will be reached with n = 3.33e6. In about 30 days all n < 3.2e6 will have been tested.

Btw, since the office got a bit warm, I have the CPU running with PPT = 128 W instead of 148 W. This decreased the package temperature from 74 °C to 68-70 °C while increasing computation times by only 1-2 %. That's not a bad tradeoff, I think.
No new primes for n < 3,200,000
So I at least updated the stats:

Largest n in progress = 3,575,000
# of digits = 1,075,000 (rank: ~405)
FFT size = 386k
Avg. time = 4230 s

Smallest n in progress = 3,200,000
# of digits = 963,500 (rank: ~1025)
FFT size = 336k
Avg. time = 3565 s

Around May 1st all n < 3,600,000 will have been tested. Sometime in August or September the sieve will be exhausted at n = 4,100,000, which makes it about a year since I started ... and I hope at least one more prime will turn up. :)
No new primes for n < 3,600,000
Largest n in progress = 3,875,000
# of digits = 1,166,500 (rank: ~325)
FFT size = 400k
Avg. time = 4970 s

Smallest n in progress = 3,600,000
# of digits = 1,085,000 (rank: ~395)
FFT size = k
Avg. time =  s

Around July all sieved candidates with n < 4,100,000 should have been tested. I'm not so optimistic anymore that a prime will turn up...
Things progressed much faster than I expected; the last batch, 4.095e6 < n < 4.100e6, will be assigned to a core today, and everything should be finished by June 10th.
If there is no prime among these last few numbers, I'll continue by sieving the Proth side to higher n and doing LLR on the fly depending on sieve/LLR times, as LaurV suggested. According to prime95, I'll be able to continue single-threaded until an FFT size of 640k is reached, which will be around n = 5.6e6; that agrees nicely with the 5.3 MB of L3 each core has. I just hope a prime will turn up before testing becomes really slow. I'm set on finding a mega prime for this k though. :D
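The FFT-size vs. per-core L3 remark above can be checked with back-of-envelope arithmetic (a sketch; it assumes 8 bytes per FFT element and divides the 3900X's 64 MiB L3 evenly across 12 cores, which simplifies away that Zen 2 L3 is actually shared per 4-core CCX):

```python
# A 640k-entry FFT with double-precision (8-byte) elements just about
# fits the ~5.3 MiB of L3 available per core on a Ryzen 9 3900X.

fft_entries = 640 * 1024           # 640k FFT
fft_bytes = fft_entries * 8        # working set in bytes
l3_per_core = 64 * 2**20 // 12     # 64 MiB L3 / 12 cores, ~5.33 MiB

print(fft_bytes / 2**20)           # 5.0 MiB -> still fits in per-core L3
```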
All n < 4,100,000 have been tested, no new primes other than the ones already posted.
Currently I'm sieving the range 4,100,000 < n < 10,000,000.

Smallest p-P range in progress: 87e12-107e12
Avg. time per factor: 810 s

Largest p-P range in progress: 310e12-330e12
Avg. time per factor: 3160 s

Remaining candidates: 100,936 / 5,900,000 (1.71%)
Est. time per LLR test of smallest candidate: 5600 s (n = 4,100,014)
Quick update, no new primes...
All n < 4,300,000 have been tested.
n < 10,000,000 sieved up to 570e12 (10,000 s / factor)
Approximately 90,000 candidates remain.

Largest LLR test currently running:
n = 4,450,000
FFT = 448k
6200 s / test
Caldwell entry rank: 290

If the LLR runtime increases strictly quadratically, it'll take until around n = 5,600,000 before an LLR test again takes as long as sieving does to eliminate a candidate. As a very rough estimate, that should take about 7 months on 10 cores. And when will the next prime appear? Judging from Primegrid's results for other prime k values, it can be a looong time.
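The n = 5,600,000 break-even estimate above follows directly from the quadratic-runtime assumption: if test time scales as t0 * (n/n0)^2, the exponent at which a test reaches the current 10,000 s per sieved-out factor is n0 * sqrt(10000/t0). A quick check with the numbers from this post:

```python
import math

t0, n0 = 6200.0, 4_450_000   # current LLR time at the current largest exponent
target = 10_000.0            # current sieving time per eliminated candidate

# quadratic scaling: solve t0 * (n/n0)^2 = target for n
n_star = n0 * math.sqrt(target / t0)
print(f"break-even around n = {n_star:,.0f}")  # ~5.65e6, i.e. around 5,600,000
```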
To keep up with the roughly monthly updates and since it's been one year now since I started searching this k, here are the current stats:
As always, no new prime. All n < 4,600,000 have been tested.

Largest LLR test currently running:
n = 4,790,000
FFT = 512k
7500 s / test
Caldwell entry rank: 274
Once again, a status update with no new prime:
[B]Sieving[/B]
All n < 10,000,000 have been sieved to 800e12.
Last reported time per factor: approx. 12,000 s
76,167 candidates (1.6%) are left in the range 5,250,000 < n < 10,000,000.

[B]LLR[/B]
All n < 5,250,000 have been tested.

Smallest LLR test currently running:
n = 5.25M
FFT = 560k
duration = 9975 s / test
digits = 1.58M
Caldwell entry rank: 254

Largest LLR test currently running:
n = 5.36e6
FFT = 560k
duration = 10100 s / test
digits = 1.61M
Caldwell entry rank: 252
If you have a GPU, use srsieve2cl for sieving. Depending upon the GPU, it might be faster than sr1sieve. I have a midrange GPU and it is 5x faster than sr1sieve on the same CPU.

[QUOTE=rogue;593620]If you have a GPU, use srsieve2cl for sieving. Depending upon the GPU, it might be faster than sr1sieve. I have a midrange GPU and it is 5x faster than sr1sieve on the same CPU.[/QUOTE]
Where can srsieve2cl be obtained? 
[QUOTE=dannyridel;593736]Where can srsieve2cl be obtained?[/QUOTE]
It is part of mtsieve and can be found here: [url]https://sourceforge.net/projects/mtsieve/[/url] 
[QUOTE=rogue;593758]It is part of mtsieve and can be found here: [url]https://sourceforge.net/projects/mtsieve/[/url][/QUOTE]
Great! Thanks. 
[QUOTE=rogue;593620]If you have a GPU, use srsieve2cl for sieving. Depending upon the GPU, it might be faster than sr1sieve. I have a midrange GPU and it is 5x faster than sr1sieve on the same CPU.[/QUOTE]
Not to hijack this thread, but based on personal experience, I've had trouble getting my GeForce RTX 2060 to sieve faster with srsieve2cl than my Intel Core i7-10875H running 8 threads with sr1sieve. In fact, it's not even close with my Riesel data (at least 2-to-1, IIRC).
Thanks, I had thought about using the GPU, since at Primegrid sieving Proth numbers with a GPU is much faster than any CPU. The reason I didn't was that I only have a GTX 760 and a GTX 1660. The 760 is quite slow and not useful. The 1660 might be OK, but I prefer to use it for the Wieferich/Wall-Sun-Sun search currently, and as Happy said, it'd need to be really fast to compete against the 12 cores of the Ryzen 9 3900X.
Btw, hijack all you want, I'm glad to hear about such things. 
[QUOTE=bur;594352]Thanks, I had thought about using the GPU since at Primegrid sieving Proth numbers with GPU is much faster than any CPU. The reason I didn't was that I only have a GTX 760 and a GTX 1660. The 760 is quite slow and not useful. The 1660 might be ok, but I prefer to use it for Wieferich/WallSunSun search currently and as Happy said, it'd need to be really fast to compete against the 12 cores of the Ryzen 9 3900X.
Btw, hijack all you want, I'm glad to hear about such things.[/QUOTE] I agree; just using 6 cores on my 3800XT provides around 4-5 Mp/s, but using -G 2 -g 40 (which seems to be the limit of improving efficiency for me) on a GTX 1650 with GDDR6 only gives me around 1 Mp/s.
While we're on the topic of sieving on GPU: has anyone tried Colab sessions for it? I don't have any experience with it; I just began copy&pasting GPU72 code, which seems to run fine. Is there a similar "fire & forget" setup available for srsieve2cl?
