mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Software (https://www.mersenneforum.org/forumdisplay.php?f=10)
-   -   Prime95 v30.4/30.5/30.6 (https://www.mersenneforum.org/showthread.php?t=26376)

Happy5214 2021-03-13 20:15

[QUOTE=Viliam Furik;573612]I think you would get a credit.[/QUOTE]
I know. I meant which one. If it weren't for that winter storm that ravaged Texas, I'd probably have reached 27th place on the all-time PRP-CF-DC rankings. If it counted toward that, I might be able to justify the extra 3 or so hours of work on my laptop to run a type-5.

Viliam Furik 2021-03-13 20:56

In that case, I am not sure... Both seem logical, for different reasons, but my guess would be DC since it already has some results.

I've checked my type-5 CF results submitted after a newly found factor (there were already 4 known factors; I found the 5th). PrimeNet registered them as double-checks because there were already results with fewer factors.

So most probably DC.

tha 2021-03-18 10:19

I still use 30.4 and found a reproducible way to cause a segmentation fault resulting in a core dump. I don't know if 30.5 fixes this; I can try, but not this week.

- Add this line to worktodo.txt (the fields are explained in a note at the end of this post):

[CODE]Pminus1=1,2,9221683,-1,1080000,21600000,69[/CODE]

- Start mprime -m

- Press ^C whilst in stage 2 well after the init stage.

- Choose option 5: quit mprime.

- Change the line in worktodo.txt to

[CODE]Pminus1=1,2,9221683,-1,2000000,21600000,69[/CODE]

- Restart mprime -m

Result:

[CODE]
[Work thread Mar 18 08:45] M9221683 stage 1 complete. 3931780 transforms. Time: 873.316 sec.
[Work thread Mar 18 08:45] With trial factoring done to 2^69, optimal B2 is 81*B1 = 162000000.
[Work thread Mar 18 08:45] If no prior P-1, chance of a new factor is 8.42%
Segmentation fault (core dumped)
henk@Z170:~/mersenne$ ./mprime -m
[/CODE]

Restarting mprime -m leads to the same result at the same point in execution.
Renaming the save file works around the issue:

[CODE]mv m9221683 copy_of_m9221683[/CODE]
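
For reference, here is how I read the fields in the Pminus1= lines above. This is only my own annotation of the worktodo entry, so take the field names as a reader's guess rather than official documentation:

[CODE]
# My own annotation of the worktodo entry above, not official documentation.
line = "Pminus1=1,2,9221683,-1,1080000,21600000,69"
k, b, n, c, B1, B2, tf_bits = map(int, line.split("=", 1)[1].split(","))

# The candidate is k*b^n + c, i.e. 2^9221683 - 1 (M9221683); B1 and B2 are
# the P-1 bounds, and the last field is how far the exponent has been trial
# factored (2^69, matching the log line above).
assert (k, b, c) == (1, 2, -1)
print(f"M{n}: B1={B1:,}, B2={B2:,}, trial factored to 2^{tf_bits}")

# The second worktodo line differs only in B1 (1,080,000 -> 2,000,000).
[/CODE]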

tha 2021-03-18 15:47

[QUOTE=tha;574013]I still use 30.4 and found a reproducible way to cause a segmentation fault resulting in a core dump. I don't know if 30.5 fixes this; I can try, but not this week.
[/QUOTE]

Confirmed on 30.5.

Prime95 2021-03-18 21:37

[QUOTE=tha;574046]Confirmed on 30.5.[/QUOTE]

Will fix in 30.5 build 2

Prime95 2021-03-21 05:58

30.5 build 2 available. It fixes the one reported bug.

kruoli 2021-03-23 14:05

When getting P-1 assignments from PrimeNet, is it expected that they do not trigger the special B2 selection? Example: [M]M103163903[/M]. With Pfactor I'm getting B1=819,000, B2=39,053,000. When I manually edit worktodo.txt to the new Pminus1 format with the B1 from above, I get B2 = 51*B1 = 41,769,000. The values are close, but not identical; I have not yet tried other amounts of RAM and would assume that the B2 values can differ much more with other settings.

It occurred to me that stage 2 takes more than double the time of stage 1 in my case, given the bounds above. The system is an AMD 3800X, 32 GB of 3,600 MHz RAM, 16 GB allocated to Prime95, version 30.5b2, only one worker using eight cores. Is this also intentional? I always thought optimal P-1 (time-wise) was to have equal time spent on both stages.

I understand that it is impossible to estimate this in code for all the different hardware that is out there, but would it be possible to have a parameter for manipulating the B2? E.g. a Stage2EffortFactor of 1 would be Prime95's fully automatic selection, while 0.5 would result in a B2 such that stage 2 takes around half the time. With that value, one could increase B1 and lower B2 (in my case) so that the most efficient work is done per unit of time, and the value could also influence the optimal B1 and B2 selection. Of course, that only makes sense if my assumption is correct that equal time should be spent on both stages.

Prime95 2021-03-23 18:05

[QUOTE=kruoli;574423]When getting P-1 assignments from PrimeNet, is it expected that they do not trigger the special B2 selection? Example: [M]M103163903[/M]. With Pfactor I'm getting B1=819,000, B2=39,053,000. When I manually edit worktodo.txt to the new Pminus1 format with the B1 from above, I get B2 = 51*B1 = 41,769,000. The values are close, but not identical; I have not yet tried other amounts of RAM and would assume that the B2 values can differ much more with other settings.[/quote]

Pfactor uses slightly different optimization criteria than "Pminus1=". Pfactor is optimizing for minimizing the total time spent doing P-1, LL, and DC. That is, it is maximizing the LL/DC CPU savings per unit of P-1 work invested. "Pminus1=" is maximizing the number of factors found per unit of P-1 work invested.
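
In pseudo-code, the difference between the two targets looks roughly like this (this is only an illustration, not the actual code; the real cost model counts transforms and weighs success probabilities much more carefully):

[CODE]
# Illustration only, not the actual Prime95 code. The cost and
# probability inputs are placeholders.

def pfactor_value(prob_factor, p1_cost, test_cost, tests_saved=2.0):
    # Pfactor: maximize the expected LL/DC CPU time saved minus the
    # P-1 time invested, i.e. minimize total project time.
    return prob_factor * tests_saved * test_cost - p1_cost

def pminus1_value(prob_factor, p1_cost):
    # Pminus1=: maximize factors found per unit of P-1 work invested.
    return prob_factor / p1_cost

# A bounds search evaluates candidate (B1, B2) pairs against one of these
# targets and keeps the best pair. Because the targets differ, the chosen
# B2 for the same B1 can differ slightly, as you observed.
[/CODE]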

[quote]It occurred to me that stage 2 takes more than double the time of stage 1 in my case, given the bounds above. The system is an AMD 3800X, 32 GB of 3,600 MHz RAM, 16 GB allocated to Prime95, version 30.5b2, only one worker using eight cores. Is this also intentional? I always thought optimal P-1 (time-wise) was to have equal time spent on both stages.

I understand that it is impossible to estimate this in code for all the different hardware that is out there, but would it be possible to have a parameter for manipulating the B2? E.g. a Stage2EffortFactor of 1 would be Prime95's fully automatic selection, while 0.5 would result in a B2 such that stage 2 takes around half the time. With that value, one could increase B1 and lower B2 (in my case) so that the most efficient work is done per unit of time, and the value could also influence the optimal B1 and B2 selection. Of course, that only makes sense if my assumption is correct that equal time should be spent on both stages.[/QUOTE]

The equal time rule was a general rule of thumb that may have worked back in the day. Prime95 carefully counts every stage 1 and stage 2 transform in making its decisions.

Falkentyne 2021-04-02 06:43

[QUOTE=Prime95;574261]30.5 build 2 available. It fixes the one reported bug.[/QUOTE]

Thank you for all the support and updating of Prime95! I think it's been 20 years now since I first used it...was it on a Pentium 3 Coppermine or something? My god ...

axn 2021-04-05 14:28

George, would it be possible to reduce the size of the P-1 stage 2 checkpoint files? Using this with Colab / Google Drive, it takes a very long time to stop/restart during stage 2; it writes a 100-200 MB save file. I am guessing it is somehow saving the stage 2 prime bitmap or something. I think it would be faster to recompute the state rather than load it from disk (with Google Drive).
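
Some rough numbers, purely to put the file size in perspective (the exponent and B2 below are illustrative values only, and I do not know what the save file actually contains):

[CODE]
# Back-of-envelope only; I don't know what Prime95 really stores.
exponent = 104_000_000      # illustrative exponent, i.e. bits per residue
B2 = 40_000_000             # illustrative stage 2 bound

residue_mb = exponent / 8 / 1e6        # one full-size residue, in MB
bitmap_mb = (B2 / 2) / 8 / 1e6         # bitmap of odd numbers up to B2, in MB

print(f"one residue ~ {residue_mb:.0f} MB, prime bitmap ~ {bitmap_mb:.1f} MB")
# Roughly 13 MB per residue vs. 2.5 MB for a bitmap, so 100-200 MB looks
# more like a pile of buffered big numbers than a bitmap. But that's a guess.
[/CODE]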

James Heinrich 2021-04-05 14:32

[QUOTE=axn;575242]I think it would be faster to recompute the state rather than load it from disk (with Google Drive).[/QUOTE]Conversely, on a local system it works quickly. Any such changes should probably be an optional code path for special use cases and not affect the majority of users who don't read/write from Drive.

