20210211, 18:41  #1 
Aug 2020
5*10398e4;3*2539e3
3×131 Posts 
Proth primes with k = 1281979
Because I was interested in how sieving works and also wanted to test Proth20 (Proth testing on GPUs), I started looking at 1281979 * 2^n + 1.
That k is large enough not to collide with PG's various Proth subprojects. It's also prime, which I find interesting: it means there can be a lot of small factors of 1281979 * 2^n + 1, so sieving will be quite efficient. Of course it also means the probability of small primes is low, so it evens out. And it's my birthday. ;)

Sieving
Range: 100 000 < n < 4 100 000
Factors: p < 290e12
I sieved with sr1sieve on a Pentium G870, which took about 6 months. I stopped when finding a factor took about 2 hours. 66272 candidates remained. I now know I should have sieved to much higher n, but I didn't know that when I began.

Primality testing
Software is LLR2; n < 1 200 000 was tested on the Pentium G870, then I switched to a Ryzen 9 3900X. For n = 1 200 000 a test takes 460 s, for n = 2 300 000 it takes 1710 s, showing nicely how well the approximation "double the digits, quadruple the time" applies, at least as long as FFT and L3 sizes match.

Results
Code:
For n < 1 000 000

      n   digits
      2        7   (not a Proth)
    142       49
    202       67
    242       79
    370      118
    578      180
    614      191
    754      233
   6430     1942
   7438     2246
  10474     3159
  11542     3481
  45022    13559
  46802    14095
  70382    21194
  74938    22565
 367310   110578
 485014   146010

Last fiddled with by bur on 20210211 at 18:46 
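The two LLR timings quoted above fit the "double the digits, quadruple the time" rule of thumb. A minimal Python sketch of that quadratic model, scaled from the post's 460 s reference point (the pure n^2 scaling is an approximation and ignores the FFT-size steps the post mentions):

```python
# Quadratic scaling model for LLR test time, anchored at the post's
# measurement of 460 s for n = 1,200,000 on the Ryzen 9 3900X.

def estimate_llr_time(n, n_ref=1_200_000, t_ref=460.0):
    """Rough LLR test-time estimate in seconds, assuming time ~ n^2."""
    return t_ref * (n / n_ref) ** 2

print(round(estimate_llr_time(2_300_000)))  # -> 1690, close to the measured 1710 s
```

The small remaining gap to the measured 1710 s is exactly the kind of FFT/cache effect the post alludes to.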
20210212, 03:49  #2 
Romulan Interpreter
Jun 2011
Thailand
9753_{10} Posts 
Good job. Some random advice: that range is way too large. Split the range into 5-10 smaller ranges for sieving. You will be able to sieve each range higher and faster; the rate of eliminating candidates over the total range will be a bit slower, but you compensate by choosing different sieve limits for each range so as to balance out the LLR duration. Right now your LLR may take (on a good computer) something like 20 minutes per candidate at the start of the range and 8 hours at the end, so how do you calculate how much to sieve?

Alternatively, you may want to stop LLR every few days/weeks and do some more sieving for a day or so on the remaining candidates, to be sure you always choose the task that eliminates numbers fastest. Write down how long it takes to eliminate candidates by sieving, and do an LLR test for a candidate about 5-10% down the list. For example, if the list has 3 thousand candidates, test a candidate at position 200-300. Write down the time. Eliminate that candidate from the list once the LLR is done. Continue sieving the list until it takes the same time to find new factors. Then LLR the first 500-600 candidates. Repeat.
There is no such thing as "I should have sieved higher": stop LLR, take a text editor, remove the already-tested (LLRed) candidates from the beginning of the list, then sieve the remaining list as high as you want. In fact, it is recommended to do this periodically: as your LLR time per candidate grows, at a certain point you will again eliminate candidates faster by continuing the sieving. You don't need to restart the sieve from scratch; you can continue any time from where it left off, to higher primes. Make a backup in case you screw up the editing, if you didn't do that before.

The sieving ranges need to be neither too large nor too small. If too large, you will lose a lot of wall-clock time either by sieving too high (when you could eliminate the smaller candidates faster with LLR, thereby speeding up future sieving sessions) or by sieving too low (when it would have been faster to eliminate the larger candidates by more sieving). If too small, you will lose a lot of wall-clock time to the overhead of switching between sieving and LLR, sieve initialization, manual work, etc. Choosing the right range size is a matter of experience, system speed, number of cores, etc. That is why CRUS, for example, works in smaller ranges: 100k, 200k, etc. Last fiddled with by LaurV on 20210212 at 03:56 
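LaurV's stop-and-compare loop boils down to one comparison per session. A hypothetical sketch of the decision rule (all timings here are made-up placeholders; in practice you would read them from sr1sieve and LLR output):

```python
# Sketch of the "always do whichever task eliminates candidates faster" rule.
# The numbers passed in below are illustrative placeholders only.

def next_task(sieve_secs_per_factor, llr_secs_per_test):
    """Pick the task that removes a candidate in the least wall-clock time."""
    return "sieve" if sieve_secs_per_factor < llr_secs_per_test else "llr"

print(next_task(1800, 2500))  # -> sieve (factors still arrive faster than tests)
print(next_task(6300, 2500))  # -> llr   (sieving has become the slower eliminator)
```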
20210212, 06:54  #3  
Aug 2020
5*10398e4;3*2539e3
189_{16} Posts 
LaurV, thanks for the advice.
I read here and at PG that the time per factor found increases only with sqrt(n), so I was explicitly told not to split the sieve. At the beginning I did just that, because I thought it was the way to distribute sieving between several cores. Quote:
Does your method take that into account? Sorry if this is a stupid question, but I'm really not sure. And various people said large sieving files are good (put simply). 

20210212, 07:30  #4 
"Curtis"
Feb 2005
Riverside, CA
4,951 Posts 
Bur, I think you've got the right idea: make one sieve that goes up to an exponent big enough that you're not sure you'll finish. If you're confident you'll get to 6M or 8M, I agree your sieve maybe could have included more n. On the bright side, you'll fully understand how big an exponent to sieve to if you make another sieve!
If the sieve program you're using scales like the srsieve family (it likely does, since you refer to the same sqrt(n)-range scaling), the plan is to sieve until the candidate-elimination rate is about double the time to test the smallest candidate. Then break off a chunk of candidates to LLR, and keep sieving the bigger ones. That gives well-sieved lists for LLR, and speeds the sieve up since the new sieve file has a smaller n-range (missing the smallest ones, I mean). I'm not sure why LaurV suggests splitting the range; that gives up a bunch of efficiency. 
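VBCurtis's break-off step is just a partition of the candidate list by exponent. A minimal sketch assuming (k, n) candidate pairs in the srsieve style (the candidate list and cutoff below are illustrative, not from the actual sieve file):

```python
# Sketch: break the smallest-n candidates off for LLR and keep sieving the
# rest. The example candidates and the 1M cutoff are made-up placeholders.

def split_candidates(candidates, n_cutoff):
    """Return (llr_batch, keep_sieving) partitioned by exponent n."""
    llr_batch = [(k, n) for (k, n) in candidates if n <= n_cutoff]
    keep_sieving = [(k, n) for (k, n) in candidates if n > n_cutoff]
    return llr_batch, keep_sieving

cands = [(1281979, n) for n in (150_000, 400_000, 950_000, 1_600_000, 2_400_000)]
batch, rest = split_candidates(cands, 1_000_000)
print(len(batch), len(rest))  # -> 3 2
```

The remaining sieve file then covers a smaller n-range, which is where the claimed speedup comes from.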
20210213, 07:33  #6 
Aug 2020
5*10398e4;3*2539e3
3·131 Posts 
VBCurtis, yes, I'm using sr1sieve, which was the fastest of the various srsieves for this task. Good advice to feed candidates from the sieve to LLR testing. I guess I would remove all those candidates that are faster to LLR than to sieve?
And a general question: sometimes the LLR task has to be interrupted (updates, etc.). Is there a better way to continue afterwards than manually checking lresults for progress and removing those numbers from the LLR input file? Can it somehow check automatically which numbers are already tested in lresults?

kar_bon, of course you're right. Fortunately, I only missed it when copying from my Excel sheet and it's not a genuine miss. Still a bit embarrassing... ;) Since I can't edit the original post anymore, here's the updated version of the table:

Results
Code:
For n < 1 000 000

      n   digits
      2        7   (not a Proth)
    142       49
    202       67
    242       79
    370      118
    578      180
    614      191
    754      233
   6430     1942
   7438     2246
   7894     2383
  10474     3159
  11542     3481
  45022    13559
  46802    14095
  70382    21194
  74938    22565
 367310   110578
 485014   146010
20210213, 07:57  #7 
Romulan Interpreter
Jun 2011
Thailand
3·3,251 Posts 
Nope. I mean, yes, if you assume the same sieving depth. But you will sieve different chunks to different depths, and that is where you gain speed. Say you work on a single k and sieve a very large range, n to N, to some depth p to P (srXsieve notation). You sieve 100k primes per second and eliminate 5 candidates per hour, but this is misleading: you eliminate large candidates, for which an LLR test would take hours, but you also eliminate small candidates, which would be eliminated faster by an LLR test. That is why you have to test how things are on your system. I will try to give a numerical example soon; let me make one first.

20210213, 17:00  #8  
"Curtis"
Feb 2005
Riverside, CA
4,951 Posts 
Quote:
LLR maintains an .ini file that includes the line number it has tested. Don't edit the input file when restarting; it will always pick up where it left off.

LaurV seems to be comparing his idea to a plan that never breaks off small pieces when they're "ready" for LLR. I am confident that removing small candidates from the sieve when appropriate is much, much faster than LaurV's splitting by n-range. He and I rarely disagree, and he will surely relish showing me I'm mistaken! 
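For reference, that restart state lives in llr.ini next to the executable. From memory it looks roughly like the fragment below; treat the key names and values as assumptions and check your own llr.ini, since they may differ between LLR versions:

```ini
; hypothetical excerpt of llr.ini - key names recalled from memory, verify locally
PgenInputFile=k1281979_sieve.txt   ; input file name here is an example
PgenLine=1234                      ; line of the candidate currently being worked on
```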

20210218, 06:57  #9 
Aug 2020
5*10398e4;3*2539e3
393_{10} Posts 
I think the break-off-or-not question depends on the timescale. Splitting the range will speed up the sieving right now. However, in the long run the overall sieving time would be shorter when sieving longer ranges. At least that's what I think.
Anyway, one core freed up and I ran sr1sieve for 2.7e6 < n < 4.1e6 and 314e12 < P < 325e12. The estimate was 24 factors in about 3 days, or about 11000 s per candidate removed. To confirm, I ran it for 7 hours, about 10%, and found 4 factors, i.e. 6300 s per candidate removed. LLR2 takes 2500 s for n = 2.7e6, so I stopped the sieve and continued with LLR. It seems sieving doesn't make sense anymore for this n-range. Of course I could extend the sieve, but for now I want to finish n < 4.1e6 and then see how, and if, I'll continue. Last fiddled with by bur on 20210218 at 06:59 
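The rate check above is simple arithmetic; spelled out in Python, using only the figures from the post:

```python
# Verifying the comparison from the post: 4 factors in a 7-hour sr1sieve
# run vs. a 2500 s LLR2 test at the low end of the remaining range.

sieve_run_secs = 7 * 3600              # the 7-hour confirmation run
secs_per_factor = sieve_run_secs / 4   # 4 factors were found
llr_test_secs = 2500                   # LLR2 time at n = 2.7e6

print(secs_per_factor)                 # -> 6300.0
print(secs_per_factor > llr_test_secs) # -> True: LLR now removes candidates faster
```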
20210309, 14:24  #10 
Aug 2020
5*10398e4;3*2539e3
3·131 Posts 
All n < 2,400,000 tested
No new primes to report. Is anyone interested in the lresults file with the residues?
Code:
For n < 2,400,000

        n    digits
        2         7   (not a Proth)
      142        49
      202        67
      242        79
      370       118
      578       180
      614       191
      754       233
    6,430     1,942
    7,438     2,246
    7,894     2,383
   10,474     3,159
   11,542     3,481
   45,022    13,559
   46,802    14,095
   70,382    21,194
   74,938    22,565
  367,310   110,578
  485,014   146,010

Largest n in progress = 3.20e6
# of digits = 963,000 (rank: ~1000)
FFT size = 320k
Avg. time = 3140 s

Smallest n in progress = 2.45e6
# of digits = 737,500 (rank: ~1690)
FFT size = 256k
Avg. time = 1935 s

In about 15 days mega prime territory will be reached with n = 3.33e6. In about 30 days all n < 3.2e6 will have been tested.

Btw, since the office got a bit warm, I have the CPU running with PPT = 128 W instead of 148 W. This decreased the package temperature from 74 °C to 68-70 °C while increasing computation times by only 1-2 %. That's not a bad tradeoff, I think. 
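The digit counts and the mega-prime threshold quoted above follow from digits = floor(log10(k) + n*log10(2)) + 1; a quick sketch:

```python
import math

K = 1281979  # the k of this search

def digits(n):
    """Decimal digits of K * 2^n + 1 (the trailing '+1' never changes the count,
    since K * 2^n is even and so never a run of nines)."""
    return math.floor(math.log10(K) + n * math.log10(2)) + 1

print(digits(3_200_000))               # -> 963303, matching the post's ~963,000
print(digits(3_330_000) >= 1_000_000)  # -> True: n ~ 3.33e6 is mega-prime territory
```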
20210410, 06:52  #11 
Aug 2020
5*10398e4;3*2539e3
3·131 Posts 
No new primes for n < 3,200,000
So I at least updated the stats:

Largest n in progress = 3,575,000
# of digits = 1,075,000 (rank: ~405)
FFT size = 386k
Avg. time = 4230 s

Smallest n in progress = 3,200,000
# of digits = 963,500 (rank: ~1025)
FFT size = 336k
Avg. time = 3565 s

Around May 1st all n < 3,600,000 will have been tested. Sometime in August or September the sieve will be exhausted at n = 4,100,000, which makes it about a year since I started... and I hope at least one more prime will turn up. :) 