2021-02-11, 18:41   #1
bur

Proth primes with k = 1281979

Because I was interested in how sieving works and also wanted to test Proth20 (Proth testing on the GPU), I started looking at 1281979 * 2^n + 1.

That k is large enough not to fall into PG's various Proth subprojects. It's also prime, which I find interesting since it means 1281979 * 2^n + 1 can have a lot of small factors, so sieving will be quite efficient. Of course it also means the probability of small primes is low, so it evens out. And it's my birthday. ;)

Sieving
Range: 100 000 < n < 4 100 000
Factors: p < 290e12

I sieved with sr1sieve on a Pentium G870, which took about 6 months. I stopped when finding a factor took about 2 hours; 66272 candidates remained.

I know now that I should have sieved to a much higher n, but I didn't know that when I began.
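
Since the whole point was learning how sieving works, here is a toy version of the idea in Python. It is only a conceptual sketch of fixed-k sieving, not how sr1sieve works internally (as far as I understand it, the real program uses baby-step/giant-step discrete logs and goes to enormous factor limits); the prime limit and the n-range here are deliberately tiny so it runs in seconds.

Code:
k = 1281979
n_min, n_max = 100_000, 110_000        # short demo range, not the real one

def odd_primes_up_to(limit):
    """Plain sieve of Eratosthenes; 2 is skipped since k*2^n + 1 is always odd."""
    flags = bytearray([1]) * (limit + 1)
    flags[0] = flags[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if flags[i]:
            flags[i * i :: i] = bytearray(len(flags[i * i :: i]))
    return [i for i in range(3, limit + 1) if flags[i]]

candidates = set(range(n_min, n_max + 1))
for p in odd_primes_up_to(2000):
    if k % p == 0:
        continue
    target = (-pow(k, -1, p)) % p      # want 2^n ≡ -1/k (mod p); needs Python 3.8+
    hit, r = None, 1
    for e in range(1, p):              # walk powers of 2 until they cycle
        r = (r * 2) % p
        if r == target and hit is None:
            hit = e
        if r == 1:
            period = e                 # multiplicative order of 2 mod p
            break
    if hit is None:
        continue                       # this p never divides k*2^n + 1
    # p divides k*2^n + 1 exactly when n ≡ hit (mod period); remove those n.
    first = n_min + (hit - n_min) % period
    for n in range(first, n_max + 1, period):
        candidates.discard(n)

print(f"{len(candidates)} of {n_max - n_min + 1} exponents survive p < 2000")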

Primality testing
The software is LLR2. Everything below n = 1 200 000 was tested on the Pentium G870; I have since switched to a Ryzen 9 3900X. For n = 1 200 000 a test takes 460 s, for n = 2 300 000 it takes 1710 s, which shows nicely how well the approximation "double the digits, quadruple the time" applies, at least as long as the FFT still fits into the L3 cache.
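
As a quick check of that rule of thumb against the two timings above (the number of digits grows linearly with n, so the time should scale roughly with the square of the n-ratio):

\[
\left(\frac{2\,300\,000}{1\,200\,000}\right)^{2} \approx 3.67
\qquad\text{vs.}\qquad
\frac{1710\ \mathrm{s}}{460\ \mathrm{s}} \approx 3.72
\]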

Results
Code:
For n < 1 000 000

k =          digits

     2            7 (not a Proth)
   142           49
   202           67
   242           79
   370          118
   578          180
   614          191
   754          233
  6430         1942
  7438         2246
 10474         3159
 11542         3481
 45022        13559
 46802        14095
 70382        21194
 74938        22565
367310       110578
485014       146010
Next milestone is n = 1 560 000 at which point any prime would be large enough to enter Caldwell's Top 5000.

2021-02-12, 03:49   #2
LaurV

Good job. Some random advice: that range is way too large. Split the range into 5-10 smaller ranges for sieving. You will be able to sieve each range higher and faster; the rate of eliminating candidates over the total range will be a bit slower, but you compensate by giving each range its own sieve limit, chosen so that the LLR durations stay balanced. Right now your LLR may take (on a good computer) something like 20 minutes per candidate at the start of the range and 8 hours at the end, so how do you calculate how deep to sieve?

Alternatively, you may want to stop LLR every few days/weeks and do some more sieving for a day or so on the remaining candidates, to be sure you always choose the task that eliminates numbers faster. Write down how long the sieve takes to eliminate a candidate. Then do an LLR test for a candidate that is about 5-10% down the list; for example, if the list has 3 thousand candidates, test one in position 200-300. Write down the time and remove that candidate from the list once the LLR is done. Continue sieving the list until finding a new factor takes about the same time. Then LLR the first 500-600 candidates. Repeat.
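
In sketch form (Python, with made-up helper callables sieve_for, seconds_per_factor, llr_test_time and llr_run standing in for the manual steps of running the sieve, measuring the removal rate, timing one LLR test and LLR-ing a block - they are not real tools):

Code:
def work_on_range(candidates, sieve_for, seconds_per_factor, llr_test_time, llr_run):
    """Alternate sieving and LLR so the cheaper job is always the one
    eliminating candidates. `candidates` is a sorted list of exponents n."""
    while candidates:
        # Time an LLR test for a candidate about 5-10% into the list
        # (e.g. position 200-300 in a list of 3000 candidates).
        probe = candidates[len(candidates) // 15]
        llr_seconds = llr_test_time(probe)
        candidates.remove(probe)          # that one is now tested, drop it

        # Keep sieving while it still removes candidates faster than
        # an LLR test of the probe candidate would.
        while candidates and seconds_per_factor(candidates) < llr_seconds:
            candidates = sieve_for(candidates, hours=24)

        # Then LLR the first 500-600 candidates and repeat.
        block, candidates = candidates[:500], candidates[500:]
        llr_run(block)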

There is no such thing as "I should have sieved it higher": stop LLR, take a text editor, get rid of the already-tested (LLR-ed) candidates at the beginning of the list, then sieve the remaining list as deep as you want. In fact, it is recommended to do this periodically; as your LLR time per candidate gets higher, at a certain point you will eliminate candidates faster by continuing to sieve. You don't need to restart the sieve from scratch, you can continue any time from where it left off, to higher factor primes. Make a backup in case you screw up the editing, if you haven't done that already.

The sieving ranges should be neither too large nor too small. If too large, you lose a lot of wall-clock time either by sieving too deep (when you could have eliminated the smaller candidates faster with LLR, which would also speed up the future sieving sessions) or by sieving too shallow (when it would have been faster to eliminate the larger candidates by more sieving). If too small, you lose a lot of wall-clock time to the overhead of switching between sieving and LLR, sieve initialization, manual work, etc. Choosing the right range size is a matter of experience, system speed, number of cores, and so on. That is why CRUS, for example, works in smaller ranges: 100k, 200k, etc.

2021-02-12, 06:54   #3
bur

LaurV, thanks for the advice.

I read here and at PG that time per factor found increases only with sqrt(n), so I was explicitly told not to split the sieve. At the beginning I did just that because I thought it was the way to distribute sieving between several cores.

Quote:
There is no such thing as "I should have sieved it higher"
By that I meant having candidates up to n = 16 400 000 in the sieve, not a larger p. It would have taken only twice as long - at least to my understanding.
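
That factor of two is just the square-root scaling applied to the range width (assuming, as mentioned above, that the sieve's cost per factor prime grows with the square root of the width of the n-range):

\[
\sqrt{\frac{16\,400\,000 - 100\,000}{4\,100\,000 - 100\,000}} \approx \sqrt{4.1} \approx 2.0
\]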

Does your method take that into account? Sorry if this is a stupid question, but I'm really not sure. And various people have said that large sieve files are good (put simply).
2021-02-12, 07:30   #4
VBCurtis

Bur, I think you've got the right idea - make one sieve that goes up to an exponent big enough that you're not sure you'll finish it. If you're confident you'll get to 6M or 8M, I agree your sieve could perhaps have included more n. On the bright side, you'll fully understand how big an exponent to sieve to if you make another sieve!

If the sieve program you're using scales like the srsieve family (it likely does, since you refer to the same sqrt-of-n-range scaling), the plan is to sieve until the time to eliminate a candidate is about double the time to test the smallest candidate. Then break off a chunk of candidates for LLR, and keep sieving the bigger ones.
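
Written out (my own made-up helper, not something from the sieve tools), the break-off test is just:

Code:
def keep_sieving(sec_per_factor_now, sec_per_test_smallest):
    # Rule of thumb: keep sieving while removing a candidate by sieving still
    # costs less than roughly twice an LLR test of the smallest remaining one.
    return sec_per_factor_now < 2 * sec_per_test_smallest

# Example with made-up numbers: at 700 s per factor and a 460 s smallest test,
# keep sieving (700 < 920); at 1000 s per factor, break off a chunk for LLR.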

That gives well-sieved lists for LLR, and speeds the sieve up since the new sieve file has a smaller n-range (missing the smallest ones, I mean). I'm not sure why LaurV suggests splitting the range- that gives up a bunch of efficiency.
2021-02-12, 10:40   #5
kar_bon

Quote:
Originally Posted by bur
Code:
For n < 1 000 000

k =
(...)
1. You meant "n=" not "k="
2. You missed: 1281979*2^7894+1 is prime (2383 decimal digits), check your results please
3. Data inserted in the Wiki

2021-02-13, 07:33   #6
bur

VBCurtis, yes, I'm using sr1sieve, which was the fastest among the various srsieve programs for this task. Good advice to feed candidates from the sieve to LLR testing. I guess I would remove all those candidates that are faster to LLR than to sieve?

And a general question: sometimes the LLR task will have to be interrupted (updates, etc.). Is there a better way to continue afterwards than manually checking lresults for progress and removing the already-tested numbers from the LLR input file? Can it somehow check automatically in lresults which numbers have already been tested?


kar_bon, of course you're right. Fortunately, the prime was only lost when copying from my Excel sheet, not a genuine miss. Still a bit embarrassing... ;)

Since I can't edit the original post anymore, here's the updated version of the table:

Results
Code:
For n < 1 000 000

n =          digits

     2            7 (not a Proth)
   142           49
   202           67
   242           79
   370          118
   578          180
   614          191
   754          233
  6430         1942
  7438         2246
  7894         2383
 10474         3159
 11542         3481
 45022        13559
 46802        14095
 70382        21194
 74938        22565
367310       110578
485014       146010
2021-02-13, 07:57   #7
LaurV

Quote:
Originally Posted by VBCurtis
I'm not sure why LaurV suggests splitting the range- that gives up a bunch of efficiency.
Nope. I mean, yes, if you assume the same sieving depth. But you will sieve different chunks to different depths, and that is where you GET speed. Say you work on a single k and you sieve a very large range, n to N, to some depth p to P (srXsieve notation). You sieve 100k primes per second and eliminate 5 candidates per hour, but this is misleading, because you eliminate large candidates, for which the LLR test would take hours, but you also eliminate small candidates, which would be eliminated faster by an LLR test. That is why you have to test how things behave on your system. I will try to give a numerical example soon, let me make one first.
2021-02-13, 17:00   #8
VBCurtis

Quote:
Originally Posted by bur
VBCurtis, yes, I'm using sr1sieve, which was the fastest among the various srsieve programs for this task. Good advice to feed candidates from the sieve to LLR testing. I guess I would remove all those candidates that are faster to LLR than to sieve?

And a general question: sometimes the LLR task will have to be interrupted (updates, etc.). Is there a better way to continue afterwards than manually checking lresults for progress and removing the already-tested numbers from the LLR input file? Can it somehow check automatically in lresults which numbers have already been tested?
You don't pick up very much sieve speed by removing candidates - break off a chunk that tests in a reasonable time, something convenient for your manual labor. That is, you might sieve 100k to 4M, then break off n = 100k-150k for LLR while 150k-4M keeps sieving. By the time you're up to 500k, you might break off 100k blocks instead - it doesn't matter a whole lot.

LLR maintains an .ini file that records the line number it has reached in the input file. Don't edit the input file when restarting - it will always pick up where it left off.

LaurV seems to be comparing his idea to a plan that never breaks off small pieces when they're "ready" for LLR. I am confident that removing small candidates from the sieve when appropriate is much much faster than LaurV's splitting by N-range. He and I rarely disagree, and he will surely relish showing me I'm mistaken!
2021-02-18, 06:57   #9
bur

I think whether or not to break off chunks depends on the time scale. Splitting the range speeds up the sieving right now, but in the long run the overall sieving time should be shorter when sieving longer ranges. At least that's what I think.


Anyway, one core freed up, so I ran sr1sieve for 2.7e6 < n < 4.1e6 and 314e12 < P < 325e12. The estimate was 24 factors in about 3 days, or about 11000 s per candidate removed. To confirm, I ran it for 7 hours, about 10% of that, and found 4 factors, i.e. 6300 s per candidate removed.


LLR2 takes 2500 s for n = 2.7e6, so I stopped the sieve and continued with LLR. It seems that for this n-range, sieving doesn't make sense anymore. Of course I could extend the sieve, but for now I want to finish n < 4.1e6 and then see how, and if, I'll continue.

2021-03-09, 14:24   #10
bur

All n < 2,400,000 tested

No new primes to report. Is anyone interested in the lresults file with the residues?

Code:
For n < 2,400,000

n =             digits

      2             7 (not a Proth)
    142            49
    202            67
    242            79
    370           118
    578           180
    614           191
    754           233
  6,430         1,942
  7,438         2,246
  7,894         2,383
 10,474         3,159
 11,542         3,481
 45,022        13,559
 46,802        14,095
 70,382        21,194
 74,938        22,565
367,310       110,578
485,014       146,010
Due to the lack of new primes, here are some statistics:

Largest n in progress = 3.20e6
# of digits = 963,000 (rank: ~ 1000)
FFT size = 320k
Avg. time = 3140 s

Smallest n in progress = 2.45e6
# of digits = 737,500 (rank: ~ 1690)
FFT size = 256k
Avg. time = 1935 s

In about 15 days Mega Prime territory will be reached with n = 3.33e6.
In about 30 days all n < 3.2e6 will have been tested.


Btw, since the office got a bit warm I have the CPU running with PPT = 128 W instead of 148 W. This decreased the package temperature from 74 °C to 68-70 °C while increasing computation times only by 1-2 %. That's not a bad tradeoff, I think.
2021-04-10, 06:52   #11
bur

No new primes for n < 3,200,000


So I at least updated the stats:

Largest n in progress = 3,575,000
# of digits = 1,075,000 (rank: ~ 405)
FFT size = 386k
Avg. time = 4230 s

Smallest n in progress = 3,200,000
# of digits = 963,500 (rank: ~ 1025)
FFT size = 336k
Avg. time = 3565 s

Around May 1st all n < 3,600,000 will have been tested.
Sometime in August or September the sieve will be exhausted at n = 4,100,000, which will make it about a year since I started ... and I hope at least one more prime will turn up. :)