mersenneforum.org (https://www.mersenneforum.org/index.php)
-   storm5510 (https://www.mersenneforum.org/forumdisplay.php?f=168)
-   -   Riesel Prime Search (https://www.mersenneforum.org/showthread.php?t=25344)

storm5510 2020-03-05 13:38

Riesel Prime Search
 
I got into the Riesel prime search early in February. There was a bit of a learning curve in figuring out how this is done and which software to use.

The main testing application, [I]LLR[/I], is quite straightforward. It simply requires an input file in the proper format. It is deterministic, running Lucas-Lehmer-Riesel tests, which prove primality for numbers of the form k·2^n-1. I started out with [I]PFGW[/I], which is probabilistic, running PRP tests on candidates. For this project, the results must be definitive, not merely probable. Very early on, I was instructed to use [I]LLR[/I] and [U]not[/U] [I]PFGW[/I]. The difference didn't really click with me at the start, but it didn't take long to realize why [I]LLR[/I] is needed. I run [I]LLR[/I] on two machines: one i7 and one i5. There is a speed difference between them, and I have found a way to compensate for it: I wrote a small console binary which splits the sieve results into two separate files, sending 60% to the i7 and the other 40% to the i5. It works quite well. I also wrote a Windows GUI variant which performs the exact same task.
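The splitter described above is easy to sketch. Here is a hypothetical Python version (my actual tool is a Windows console binary, so this is an illustration, not the real thing); it assumes the NewPGen convention of one header line followed by one candidate per line, and all filenames are made up:

```python
# Hypothetical re-creation of the 60/40 sieve-file splitter described
# above. Assumes NewPGen-style input: one header line, then one
# candidate per line. Filenames and the split ratio are illustrative.

def split_sieve(infile, out_fast, out_slow, fast_share=0.60):
    """Copy the header to both outputs, then send fast_share of the
    candidates to out_fast and the rest to out_slow."""
    with open(infile) as f:
        lines = f.read().splitlines()
    header, candidates = lines[0], lines[1:]
    cut = round(len(candidates) * fast_share)
    with open(out_fast, "w") as f:
        f.write("\n".join([header] + candidates[:cut]) + "\n")
    with open(out_slow, "w") as f:
        f.write("\n".join([header] + candidates[cut:]) + "\n")

# Usage (illustrative names): split_sieve("sieve.txt", "llr_i7.txt", "llr_i5.txt")
```

Each output file keeps the original header, so either one can be fed to [I]LLR[/I] directly.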

Sieving: This is still in experiment mode, more or less. I started out sieving with [I]NewPGen[/I]. Several members here indicated that I should be using the "sr" family of programs instead: [I]srsieve, srsieve2, sr1sieve[/I], and [I]sr2sieve[/I]. These are faster. The last two have an option to run at low priority, and I use it.

The question then became, "How far do I need to sieve?" Some reading in the forums indicated many had their own ways. Some like to run to a set [I]p[/I] limit, for example 700e9 (700 billion). This route can be, and most often is, very time-consuming. Two members I corresponded with suggested that I treat this as a function of time, not size. Most sieve applications report a "removal rate," the interval in seconds between newly found factors. The longer the run, the further apart the factors become: one factor every 60 seconds is a much slower removal rate than one every 6 seconds. The idea, as it was explained to me, is to stop sieving once finding the next factor takes longer than an [I]LLR[/I] test would.

The next issue was where to sieve. Running [I]LLR[/I] and sieving on the same machine creates a significant bottleneck for [I]LLR[/I]: doing so added an extra 10 to 12 seconds for each [I]n[/I] in the input file. I had a third option, a Dell laptop with a dual-core i5. This laptop [U]was not[/U] designed for this sort of work, so it runs at low priority. Heat is problematic for laptops; mine sits on a cooling device with five fans, which made a real difference. This is where the sieving process has settled. It is not fast, but it does a good job. I am running [I]NewPGen[/I] on it for several reasons. One, it is a GUI application which is easy to use, and for me to see. Two, it has several options to limit the sieving time: removal rate, [I]p[/I] size, and the number of hours. Finally, there is no need to rush to save time for the two [I]LLR[/I] machines. Not pushing the laptop beyond its capability is my primary concern; I do not want to shorten its life more than necessary.

And so it goes...

