
mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Marin's Mersenne-aries (https://www.mersenneforum.org/forumdisplay.php?f=30)
-   -   processed dc and tc posts (https://www.mersenneforum.org/showthread.php?t=24152)

Mark Rose 2019-03-06 21:40

processed dc and tc posts
 
You may wish not to delete claims immediately, lest they never be completed. It happens from time to time.

Uncwilly 2019-03-07 14:56

processed dc and tc posts
 
A collection thread for the posts cleared from the DC and TC thread.

LaurV 2019-05-07 10:18

[QUOTE=Uncwilly;515736]You are a supermod. Next time you can put it on the list yourself. GP2 and ATH have been editing the lists too.[/QUOTE]
Which is a bit frowned upon, in this part of the lake... I would not recommend that all of us who, thanks to the benevolence of the admins, appeared like mushrooms after the rain, now start modifying the assignment lists and so on. For many projects we have little idea what the guys and gals are doing there... Just my two pence... The mod's job is to be an arbiter and intervene if some sheep happens, but to let the project managers do their jobs... they know better what to do and what is needed, etc...

Uncwilly 2019-05-07 16:10

[QUOTE=LaurV;516000]Which is a bit frowned upon, in this part of the lake... I would not recommend that all of us who, thanks to the benevolence of the admins, appeared like mushrooms after the rain, now start modifying the assignment lists and so on.[/QUOTE]
Sure, but I think that I am ok with Luigi doing it. George, Alex, and some others I would have no problem with.

You, on the other hand, must beg for assignments and pay a steep fee.

LaurV 2019-05-09 13:45

[QUOTE=Uncwilly;516045]You, on other hand, must beg for assignments and pay a steep fee.[/QUOTE]
Haha, ok, ok, you won't see me here, at least not until [URL="https://www.mersenne.org/report_exponent/?exp_lo=666666667&full=1"]this[/URL] ends (ETA ~20 days, over 90% done, LL+DC in parallel).

kriesel 2019-05-17 15:04

1 Attachment(s)
[QUOTE=ATH;517009]You should register them in Primenet as well; everyone might not read this thread.[/QUOTE]I would if I could. I checked, via the exponent details pages, that they are not assigned to anyone else, then tried to manually reserve them individually, and got error messages.

Edit: these are being run in CUDALucas on two fast gpus and should be done in a few days. The method I described above usually works for gpu assignments. The error message that occurs on manual checkout of these is:
[CODE]Error code: 40

Error text: No assignment available meeting CPU, program code and work preference requirements[/CODE]

Uncwilly 2019-05-17 15:23

[QUOTE=ATH;517009]You should register them in Primenet as well; everyone might not read this thread.[/QUOTE]
[QUOTE=kriesel;517012]I would if I could. I checked, via the exponent details pages, that they are not assigned to anyone else, then tried to manually reserve them individually, and got error messages.[/QUOTE]
From the first post in this thread.
[QUOTE=Uncwilly;510268]The easy way to get some assignments (with Prime95):[LIST][*]Grab the lines of the exponents that you will do.[*]Stop Prime95.[*]Edit your worktodo.txt, inserting the new lines, save it, and close the editor.[*]Restart Prime95[*]Go to the menu Advanced-> Manual Communication[*]Make sure "Contact PrimeNet server now" and "Send new expected completion dates to server" are checked. Hit OK[*]After Prime95 is done communicating, look and see if any of the new exponents don't have an Assignment ID (because they were already assigned.) If they don't, stop Prime95, reopen the worktodo.txt, remove those lines, save, then restart Prime95[/LIST][/QUOTE]

That should do it. mprime probably has a similar technique.
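For example (an illustration; the AID shown is a made-up placeholder), a DC entry in worktodo.txt before and after PrimeNet issues an Assignment ID looks like:
[CODE]DoubleCheck=50878981,74,1
DoubleCheck=0123456789ABCDEF0123456789ABCDEF,50878981,74,1[/CODE]where 74 is the trial-factoring depth in bits and the trailing 1 means P-1 has been done.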

kriesel 2019-05-17 16:41

[QUOTE=Uncwilly;517015]From the first post in this thread.

That should do it.[/QUOTE]It does indeed. Although for gpu usage, and my crappy slow ISP, there are additional steps:
repeat the manual communication in prime95 as needed until communication completes successfully instead of giving CURL library errors;
stop prime95, cut the doublecheck assignments with new AIDs from the worktodo.txt of the prime95 instance used to reserve them, restart prime95;
stop the CUDALucas session, paste the doublecheck assignments with newly issued AIDs into the gpu worktodo.txt, and restart the CUDALucas session;
update local assignment tracking log.

Effective, though a bit tedious compared to my usual manual checkout method, when that method works.
I wonder whether primenet will get confused by these 4 assignments completing and being reported manually in a few days, after being reserved on an old i3 laptop that could not finish one of them in 3 months, and after being removed from the worktodo.txt of that regularly reporting i3 system.

kriesel 2019-05-23 15:14

methods for gpus
 
[QUOTE=Uncwilly;510268]
The easy way to get some assignments (with Prime95):
[LIST][*]Stop Prime95.[*]Edit your worktodo.txt, inserting the new lines, save it, and close the editor.[*]Restart Prime95[*]Go to the menu Advanced-> Manual Communication[*]Make sure "Contact PrimeNet server now" and "Send new expected completion dates to server" are checked. Hit OK[*]After Prime95 is done communicating, look and see if any of the new exponents don't have an Assignment ID (because they were already assigned.) If they don't, stop Prime95, reopen the worktodo.txt, remove those lines, save, then restart Prime95[/LIST] If someone can post tips for the GPU's that would help.[/QUOTE]The following assumes gpu application(s) CUDALucas etc. are already running and are not near completion of the current work item(s). Add gpu application stops and starts as appropriate if that assumption is not valid.
[LIST][*]Post which DC or TC candidates you're taking.[*]Grab the lines of the exponents that you will do.[*]Edit your gpu(s)' worktodo.txt, inserting the new lines, save it, and close the editor.[/LIST]Continue with either Method 1:[LIST][*]Go to [URL]https://www.mersenne.org/manual_assignment/[/URL][*]Set the preferred work type to Double check LL tests[*]Set the Optional exponent range fields to one less and one more than the exponent in the first added worktodo entry that does not yet have an AID[*]Click Get Assignments[*]If it succeeds, copy the assignment with AID, overwrite the corresponding no-AID gpu worktodo.txt entry with it, and save.[*]If it fails, it is likely with error 40, no assignment available. Use a prime95 session as uncwilly described to get the assignment, then copy that to the gpu worktodo.txt, again overwriting the no-AID entry.[*]Repeat the previous steps of Method 1 for each DC or TC. Depending on how many you're doing, it can get tedious.[/LIST]Or continue with Method 2:
[LIST][*]Reserve as described by uncwilly in a prime95 session.[*]Copy the new assignments with AIDs from the prime95 worktodo file to the gpu(s)' worktodo file(s), save, then close the gpu worktodo file(s).[*]Move the new assignments to the end of the prime95 worktodo.txt file, behind some ongoing lengthy work intended for the cpu, save the prime95 worktodo file, stop and restart prime95, and check with Test, Status that the assignment order in effect on the cpu is as intended.[/LIST]For either method, wrap up:
[LIST][*]If you're unable to get a reservation for any exponent you posted you were taking, by either method, edit your initial post to exclude it. If the edit period has already expired, post which ones you were unable to reserve in a new message.[/LIST]Optionally, if any gpu assignment is about to expire (because manual extension does not work), try
[LIST][*]Open the prime95 worktodo file, temporarily put its worktodo entry at the front of the prime95 work list, save prime95 worktodo file, stop and restart prime95 computation to run [B][COLOR="Red"]at least 0.1% of completion.[/COLOR][/B][*]Go to the prime95 menu Advanced-> Manual Communication[*]Make sure "Contact PrimeNet server now" and "Send new expected completion dates to server" are checked. Click OK, and wait for prime95 to finish talking with the primenet server[*]Stop prime95, move the extended assignment to the end of the prime95 worktodo file, save and close prime95 worktodo file, restart prime95, check with Test, Status.[*]After the gpu assignment completes, remove the corresponding assignment entry from the prime95 worktodo file, save and close the prime95 worktodo file, stop computation and resume in prime95, delete the prime95 save file and backup files for the few dozen iterations performed on the cpu for the completed gpu assignment.[/LIST] A variant of this extension method seemed to work, in which I commented out the prime95 assignment copy and deleted the exponent's save file and bu files, on an exponent I had that was going to expire yesterday. It gained 10 weeks in the assignments report. But I see today that it is marked expired for me and assigned to someone else. Its chance of working to maintain the extension may be better if it remains in the prime95 instance's worktodo as an active entry that will be included in periodic automatic reporting of progress. By putting it at the end of the work list, behind a lengthy DC I'm running on the prime95 instance, it should report status periodically from there, and yet not use up cpu cycles partially duplicating gpu work.

A side effect of using this method for extension is it changes the assignment from manual to the prime95 instance used to extend it. At least that is what the assignments page shows.

Because this assignment or extension method might confuse Primenet about what's happening on the prime95 instance, and any expirations may count against that instance/system in getting new assignments, consider using an old slow cpu for making these reservations or extensions, where it will make little difference regarding assignment of future work via primenet.

GP2 2019-05-30 08:54

[QUOTE=Uncwilly;518085]GP2 can you handle the honours of posting the worktodo lines? I have no clue about them.[/QUOTE]

Lines suitable for doing a double-check with mprime would be:
[CODE]
PRP=1,2,87667499,-1,76,0,3,4
PRP=1,2,87595463,-1,76,0,3,4
[/CODE]

I assume gpuOwL can understand the same format.

kriesel 2019-06-10 12:44

methods for gpus (amended)
 
[QUOTE=kriesel;517563]The following assumes gpu application(s) CUDALucas etc. are already running and are not near completion of the current work item(s). Add gpu application stops and starts as appropriate if that assumption is not valid.
[LIST][*]Post which DC or TC candidates you're taking.[*]Grab the lines of the exponents that you will do.[*]Edit your gpu(s)' worktodo.txt, inserting the new lines, save it, and close the editor.[/LIST]Continue with either Method 1:[LIST][*]Go to [URL]https://www.mersenne.org/manual_assignment/[/URL][*]Set the preferred work type to Double check LL tests[*]Set the Optional exponent range fields to one less and one more than the exponent in the first added worktodo entry that does not yet have an AID[*]Click Get Assignments[*]If it succeeds, copy the assignment with AID, overwrite the corresponding no-AID gpu worktodo.txt entry with it, and save.[*]If it fails, it is likely with error 40, no assignment available. Use a prime95 session as uncwilly described to get the assignment, then copy that to the gpu worktodo.txt, again overwriting the no-AID entry.[*]Repeat the previous steps of Method 1 for each DC or TC. Depending on how many you're doing, it can get tedious.[/LIST]Or continue with Method 2:
[LIST][*]Reserve as described by uncwilly in a prime95 session.[*]Copy the new assignments with AIDs from the prime95 worktodo file to the gpu(s)' worktodo file(s), save, then close the gpu worktodo file(s).[*]Move the new assignments to the end of the prime95 worktodo.txt file, behind some ongoing lengthy work intended for the cpu, save the prime95 worktodo file, stop and restart prime95, and check with Test, Status that the assignment order in effect on the cpu is as intended.[/LIST]For either method, wrap up:
[LIST][*]If you're unable to get a reservation for any exponent you posted you were taking, by either method, edit your initial post to exclude it. If the edit period has already expired, post which ones you were unable to reserve in a new message.[/LIST]Optionally, if any gpu assignment is about to expire (because manual extension does not work), try
[LIST][*]Open the prime95 worktodo file, temporarily put its worktodo entry at the front of the prime95 work list, save prime95 worktodo file, stop and restart prime95 computation to run [B][COLOR=Red]a couple dozen iterations[/COLOR][/B],[*]Go to the prime95 menu Advanced-> Manual Communication[*]Make sure "Contact PrimeNet server now" and "Send new expected completion dates to server" are checked. Click OK, and wait for prime95 to finish talking with the primenet server[*]Stop prime95, move the extended assignment to the end of the prime95 worktodo file, save and close prime95 worktodo file, restart prime95, check with Test, Status.[*]After the gpu assignment completes, remove the corresponding assignment entry from the prime95 worktodo file, save and close the prime95 worktodo file, stop computation and resume in prime95, delete the prime95 save file and backup files for the few dozen iterations performed on the cpu for the completed gpu assignment.[/LIST] A variant of this extension method seemed to work, in which I commented out the prime95 assignment copy and deleted the exponent's save file and bu files, on an exponent I had that was going to expire yesterday. It gained 10 weeks in the assignments report. But I see today that it is marked expired for me and assigned to someone else. Its chance of working to maintain the extension may be better if it remains in the prime95 instance's worktodo as an active entry that will be included in periodic automatic reporting of progress. By putting it at the end of the work list, behind a lengthy DC I'm running on the prime95 instance, it should report status periodically from there, and yet not use up cpu cycles partially duplicating gpu work.

A side effect of using this method for extension is it changes the assignment from manual to the prime95 instance used to extend it. At least that is what the assignments page shows.

Because this assignment or extension method might confuse Primenet about what's happening on the prime95 instance, and any expirations may count against that instance/system in getting new assignments, consider using an old slow cpu for making these reservations or extensions, where it will make little difference regarding assignment of future work via primenet.[/QUOTE]
Substitute "at least 0.1% of completion" in place of the phrase highlighted in red.

Uncwilly 2019-06-10 13:46

[QUOTE=kriesel;519000]Substitute "at least 0.1% of completion" in place of the phrase highlighted in red.[/QUOTE]
You needed to quote the whole thing to change 1 clause?

kriesel 2019-06-10 14:52

[QUOTE=Uncwilly;519007]You needed to quote the whole thing to change 1 clause?[/QUOTE]
As a convenience to readers, all the gpu-related information was preserved in one post. I suppose I could have done it not as a quote but as an amended repost. Lacking moderator privileges here, I could not do as you did and modify post 5. I was not counting on a moderator to change post 5. Post 6 was intended for readers with gpus. I don't understand why you object to a full quote with color highlighting that makes it very easy to see where the difference lies.
(Do you prefer extremely terse expression of such, for example "@post 5, s/a couple dozen iterations/at least 0.1% of completion/", which makes finding the change location as inconvenient as practical?)

If conciseness is your goal, please remove posts 6, 7, and 8 (this one), and perhaps also the color highlighting in 5.

kriesel 2019-06-22 16:36

[QUOTE=GP2;511979]PRP checks
[B]...[/B]
(Type-1 residues, use mprime, [B]don't use gpuOwL[/B] because most versions can't do Type-1)
[B]...[/B]
Please [COLOR=Red][B]use gpuOwL[/B][/COLOR] to test these
[/QUOTE]Gpuowl V0.7 to V3.9 does PRP residue type 1; Gpuowl V4.x to V6.5 does PRP residue type 4 (and in v4.x-5.0, has PRP-1 residue type 0 option). See [URL]https://www.mersenneforum.org/showpost.php?p=519603&postcount=15[/URL] The switch to type 4 occurred when Preda was adding P-1 capability. I believe all PRP-capable gpuowl versions to date do offset zero only.
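For readers unfamiliar with the residue-type jargon, here's my gloss (mine, not from the gpuowl docs): a base-3 PRP test of N = 2[SUP]p[/SUP]-1 computes a power of 3 mod N and reports its low 64 bits as the Res64; type 1 is the plain Fermat residue 3[SUP]N-1[/SUP] mod N, and the other numbered types tweak the final exponent, which is why Res64s from programs producing different residue types can't be compared directly. A toy Python sketch of the type-1 case:
[CODE]# Toy base-3 type-1 PRP illustration (my sketch; real programs use FFT
# arithmetic, shift counts, and the Gerbicz check). Other residue types
# alter the final exponent slightly, so their Res64s differ.
p = 67                       # M67 is composite, so the residue is nontrivial
N = (1 << p) - 1
res = pow(3, N - 1, N)       # type-1 (Fermat) PRP residue
print(res == 1)              # False: N is not a base-3 probable prime
print(f"Res64: {res & 0xFFFFFFFFFFFFFFFF:016X}")[/CODE]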

kriesel 2019-06-22 19:26

I'm tackling Doublecheck=[M]50878981[/M],74,1 (LL in gpuowl v0.6 with Jacobi check)
Re my post 7, I'm not sure there was a gpuowl version designated v0.7 (or 0.8 or 0.9). I've confirmed a v1.0 was announced by Preda.
[URL]https://www.mersenneforum.org/showpost.php?p=466649&postcount=186[/URL]

kriesel 2019-06-24 14:04

oops
 
Gpuowl V0.7 to 1.1 and perhaps later does PRP residue type 4; v1.5 & perhaps earlier to V3.9 does PRP residue type 1; Gpuowl V4.x to V6.5 does PRP residue type 4 (and in v4.x-5.0, has PRP-1 residue type 0 option). See [URL]https://www.mersenneforum.org/showpost.php?p=519603&postcount=15[/URL] The switch to type 4 occurred when Preda was adding P-1 capability. I believe all PRP-capable gpuowl versions to date do offset zero only.

moebius 2019-07-18 11:31

[QUOTE=kriesel;519986]Gpuowl V0.7 to 1.1 and perhaps later does PRP residue type 4; v1.5 & perhaps earlier to V3.9 does PRP residue type 1; Gpuowl V4.x to V6.5 does PRP residue type 4 (and in v4.x-5.0, has PRP-1 residue type 0 option). See [URL]https://www.mersenneforum.org/showpost.php?p=519603&postcount=15[/URL] The switch to type 4 occurred when Preda was adding P-1 capability. I believe all PRP-capable gpuowl versions to date do offset zero only.[/QUOTE]


It now seems to be residue type 1 again.

[url]https://mersenneforum.org/showpost.php?p=521194&postcount=1273[/url]

kriesel 2019-08-02 19:59

[QUOTE=moebius;521847]It now seems to be residue type 1 again.
[/QUOTE]Yes. The content at the link I posted earlier was updated. The plan is to continue updating blog content in place in the future, so the links scattered about remain valid pointers to current, updated reference data. Feel free to PM me with any link whose content becomes stale.

Prime95 2019-08-05 14:26

Can someone please DC this one: M89657587
For some reason prime95 calculated an incorrect checksum in the results line. Very weird. I've got several more like it if this one fails DC.

GP2 2019-08-05 15:00

[QUOTE=Prime95;523139]Can someone please DC this one: [M]M89657587[/M]
For some reason prime95 calculated an incorrect checksum in the results line.[/QUOTE]

Which version of prime95 returns a zero shift count??

kriesel 2019-08-16 15:53

[QUOTE=Uncwilly;523181]Can you post a single post that handles all of the data that is needed for the GPUs. And if you have some updates on GP2's post, can you spell it out all nice and neat? That way I can update the posts (or replace them with the new one.) That way we can keep the thread tidy. We can even break out my how-to info into the same post as the GPU how to. I tried to keep it short and not worry about covering -everything-, just enough for normal cases.[/QUOTE]
There's a revised, somewhat streamlined version at [URL]https://www.mersenneforum.org/showpost.php?p=519834&postcount=21[/URL]. Just link to that. Something like:[QUOTE]For gpu runs, see [URL]https://www.mersenneforum.org/showpost.php?p=519834&postcount=21[/URL] and follow the links it contains as needed.[/QUOTE]The gpu directions are longer than those for the prime95/mprime case, partly because the gpu situation is more complicated. I may reduce the post's size by splitting the optional extension portion off into a separate linked post.

Feel free to paste the above quote line containing a link into your leading post in place of "If someone can post tips for the GPU's that would help." Then perhaps remove my posts in this thread about it.

It would be good also to modify GP2's post to specify selecting a gpuowl version matching the PRP residue type required, and indicate which residue type is needed. "V4.x or higher" probably gave residue type 4 at the time it was written, but that's no longer the case, because while earlier commits of v6.5 do type 4, v6.5-84-g30c0508 does type 1. You can see residue type switches a few times in gpuowl's history, as well as a switch from LL to PRP. (attachment at [url]https://www.mersenneforum.org/showpost.php?p=519603&postcount=15[/url])

Uncwilly 2019-08-16 16:59

[QUOTE=kriesel;523762]It would be good also to modify GP2's post to specify selecting a gpuowl version matching the PRP residue type required, and indicate which residue type is needed. "V4.x or higher" probably gave residue type 4 at the time it was written, but that's no longer the case, because while earlier commits of v6.5 do type 4, v6.5-84-g30c0508 does type 1. You can see residue type switches a few times in gpuowl's history, as well as a switch from LL to PRP. (attachment at [url]https://www.mersenneforum.org/showpost.php?p=519603&postcount=15[/url])[/QUOTE] Can you PM GP2 about this and work it out? My current knowledge about residue types, etc. is nearly non-existent.

kriesel 2019-08-16 19:59

[QUOTE=Uncwilly;523767]Can you PM GP2 about this and work it out? My current knowledge about residue types, etc. is nearly non-existent.[/QUOTE]
Will do. I see you made the post-one edit I suggested, and more. One bit of lint:
in sentence five there, "show [B]nor[/B] progress", I think you mean [B]no[/B], not [B]nor[/B].

Uncwilly 2019-08-30 04:04

[QUOTE=Uncwilly;524799]Also, normally people don't post about finishing, unless the OP of the exponent makes mention that they are concerned about a machine's stability.[/QUOTE][QUOTE=mrh;524816]Ok. I tried in a post on 8/15. Oh well, I'll go try something else.[/QUOTE][QUOTE=nomead;524826]Yeah but in addition to posting, you also need to register the assignment in Primenet.[/QUOTE]To make things clearer: yes, you stated that you would run the number. But since it wasn't registered on PrimeNet, progress on it could not be tracked. When you posted on 8/15 that it was completed, it did not register as significant in my mind because, as noted above, we nominally don't report completion. I had moved the other post with the claim over to the other thread. There was no intentional slight.

Registering exponents prevents duplication of effort.

nomead, registering is important, but checking in the results is sufficient to clear it.

mrh, hang around and do what you find fun.

moebius 2019-10-27 16:55

1 Attachment(s)
The last leftover LL exponent from my flaky Phenom processor should be verified sooner rather than later (for the error ratio, see the attached PDF document). Maybe somebody wants to take this one.

[URL="https://www.mersenne.org/report_exponent/?exp_lo=81269471&exp_hi=&full=1"]81269471[/URL]

ewmayer 2019-12-27 03:45

1 Attachment(s)
[url=https://www.mersenne.org/report_exponent/?exp_lo=96365419&full=1]96365419[/url] is my first non-DC PRP test using Mlucas v19 ... this ran on my unstable Haswell, with lots of random-error quits, but now I always run two 4-thread jobs at the same time on that machine, so if one dies the other can continue. Logfile attached; the ~17 ms/iter timings were with the job getting ~90% of cycles, the ~30 ms ones with it getting ~50% of cycles on this system. Early in the run you can see it emitting ROE = 0.4375 warnings and getting stuck in an infinite loop; that was the reason for my 3 Dec patch to the original 1 Dec v19 release. So lotsa logfile mess, but no Gerbicz-check errors. An early DC using a different program would be appreciated.

kriesel 2019-12-27 13:25

[QUOTE=ewmayer;533615][URL="https://www.mersenne.org/report_exponent/?exp_lo=96365419&full=1"]96365419[/URL] is my first non-DC PRP test using Mlucas v19 ... this ran on my unstable Haswell, with lots of random-error quits ... An early DC using a different program would be appreciated.[/QUOTE]Which PRP residue type does Mlucas do? Never mind, I see it was type 1. But I can't reserve that exponent for DC. Looks like endless_mike is on it.

endless mike 2020-01-07 02:04

[QUOTE=ewmayer][M]96365419[/M] is my first non-DC PRP test using Mlucas v19 ... this ran on my unstable Haswell, with lots of random-error quits, but now I always run two 4-thread jobs at the same time on that machine, so if one dies the other can continue. Logfile attached; the ~17 ms/iter timings were with the job getting ~90% of cycles, the ~30 ms ones with it getting ~50% of cycles on this system. Early in the run you can see it emitting ROE = 0.4375 warnings and getting stuck in an infinite loop; that was the reason for my 3 Dec patch to the original 1 Dec v19 release. So lotsa logfile mess, but no Gerbicz-check errors. An early DC using a different program would be appreciated.[/QUOTE]

[QUOTE=endless mike]I just added it and was coming back to state my claim.[/QUOTE]

[QUOTE=ewmayer]Thanks - I woke up during the night and realized I'd forgot to mask off the last 2 digits of the final Res64 in the logfile I posted, so strictly as a matter of form - this is a scientific project, after all - I suggest we impose on George by having you send him your final checkpoint file when your run completes, so he can re-run the last few thousand iterations and validate the result, like he does for new-prime cases reported to the server.

(That is assuming your result matches mine - if not, it won't be any issue, and instead we'll be needing a 3rd run).[/QUOTE]

It looks like we matched. If someone could explain to me what to send to George, that would be great.

ATH 2020-01-07 04:56

[QUOTE=endless mike;534455]It looks like we matched. If someone could explain to me what to send to George, that would be great.[/QUOTE]

The newest p96365419.* file, so the last iterations can be rerun. The problem is the file is around 45-48 MB, so it cannot be sent by email; I'm not sure if you have a way to send or host it.

endless mike 2020-01-19 02:13

Claimed this one.
Is there a post that describes what all the fields are in a worktodo entry for PRP? I always guess and hope I did it right.

[QUOTE=GP2;511979]PRP checks

EWM: My Haswell-quad run of [url=https://www.mersenne.org/report_exponent/?exp_lo=96365519&full=1]96365519[/url] was hit by the Mlucas v19 PRP-residue postprocessing bug reported [url=https://mersenneforum.org/showthread.php?t=25117]here[/url]. I've re-run the final Kiter using a new build with the bugfix and am 99.9999999% sure that the actual PRP-residue data were unaffected, but better safe than sorry ... and we can use more PRP-DC data for error-stats-gathering anyway.[/QUOTE]

ewmayer 2020-01-19 02:57

[QUOTE=endless mike;535486]Claimed this one.
Is there a post that describes what all the fields are in a worktodo entry for PRP? I always guess and hope I did it right.[/QUOTE]

Thanks - here is my original worktodo entry; I presume/hope you can simply re-use it, i.e. the assignment ID doesn't change from a first-time test to a DC:
[i]
PRP=EDDC25414116177C4F046D79BE11A463,1,2,96365519,-1,76,0
[/i]
Also, when you say you claimed it, did you actually reserve it via the server, or just by way of raise-hand-here, since the server won't be handing any such large ones out as DCs any time soon?

ewmayer 2020-01-19 19:25

[QUOTE=ATH;535506]PRP=EDDC25414116177C4F046D79BE11A463,1,2,96365519,-1,76,0,3,1

Added the 2 extra arguments that can be in the assignment: ",3,1"[/QUOTE]

Ah, thanks for reminding me of that difference between PRP and PRP-DC, Andreas.
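For endless mike's earlier question, here's my reading of the fields, pieced together from the worktodo lines in this thread (a reader's annotation, not official documentation; the field names are my own labels):
[CODE]PRP=[AID,]k,b,n,c[,how_far_factored,tests_saved[,base,residue_type]]

AID               32-hex-digit assignment ID (absent until PrimeNet issues one)
k,b,n,c           the number tested is k*b^n+c; 1,2,96365519,-1 means M96365519
how_far_factored  trial-factoring depth in bits (76 above)
tests_saved       tests a found factor would save (0 above, P-1 being done)
base              PRP base (3 above, and in GP2's lines earlier in the thread)
residue_type      PRP residue type (1 above; GP2's earlier lines ended in ,3,4)[/CODE]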

Oh, re. reserving specific exponents for early-DC (or early-whatever), the following occurred to me last night, and I wanted to run it by the folks here to gauge whether it would be useful - on a given exponent's status page, if the exponent has not been retired (via successful DC), enable a "reserve this exponent for [insert whatever work makes most sense for it]" widget.

(Actually, with deeper P-1 always being a possibility for not-completely-factored numbers, and PRP-C now also being possible for same, it seems the only way to truly 'retire' a non-prime M-number is to probable-completely-factor it, i.e. one or more small factors found and PRP-C finding the remaining cofactor to be probably prime.)

ewmayer 2020-03-05 20:00

[QUOTE=Uncwilly;538926]You just edited a post (no post saying there was fresh work) and did not put the work in a code box. Thus, no one noticed.[/QUOTE]
Ah - I figured the regulars around here would regularly scan the to-do entries in the various placeholder posts and grab the ones that fit their preferences. In future shall supplement basic assignment-needs-DC with new posts containing my long accompanying rambles. :) (Yah, I see your editorial snark).

[QUOTE]I have been off on holiday and then laid low by "what's going around". I had planned to do maintenance work on the lists.[/QUOTE]
Given the Covid-19 pandemic, these days one needs to be careful with verbiage like "I caught what's going around," lest one unduly alarm one's friends and loved ones.

Thanks to Jan S for taking up the 2 96M PRP-3 early-DCs.

ewmayer 2020-03-17 21:05

[QUOTE=kriesel;539975]It's running now, 4% complete, ETA ~9pm US CDT 2020-03-18 which is UTC -0500.[/QUOTE]

Ken, if your GPU client uses the convention that LL initial seed is iteration 0, you can compare interim every-1M-iter Res64s vs mine:
[code][2020-02-11 10:58:54] M50699483 Iter# = 1000000 Res64: FF270405EA0D7239. shift: 18503814
[2020-02-12 01:52:58] M50699483 Iter# = 2000000 Res64: 10BD8630CDC36E73. shift: 28532790
[2020-02-12 18:32:46] M50699483 Iter# = 3000000 Res64: D4F95C0FB91AD1EE. shift: 1090728
[2020-02-13 12:54:54] M50699483 Iter# = 4000000 Res64: 2F3D4D841DD790ED. shift: 32000164
[2020-02-13 18:00:52] M50699483 Iter# = 5000000 Res64: 6762222E6DAC7A3D. shift: 4094697
[2020-02-13 23:10:28] M50699483 Iter# = 6000000 Res64: 0C3E78B1C77FA688. shift: 33080656
[2020-02-14 04:30:42] M50699483 Iter# = 7000000 Res64: 62AFD692A5EC4EB2. shift: 41528603
[2020-02-14 09:48:36] M50699483 Iter# = 8000000 Res64: 9F2AC42F44FA0D49. shift: 20612489
[2020-02-14 16:10:17] M50699483 Iter# = 9000000 Res64: 2572440B76CF7B14. shift: 8838056
[2020-02-15 07:43:44] M50699483 Iter# = 10000000 Res64: 5D942313DA74C513. shift: 18311478
[2020-02-15 23:11:13] M50699483 Iter# = 11000000 Res64: AF441CB0956493CF. shift: 16158596
[2020-02-16 15:03:47] M50699483 Iter# = 12000000 Res64: FD12342A64D97B8F. shift: 27115678
[2020-02-17 06:06:49] M50699483 Iter# = 13000000 Res64: 2E9753D14381B557. shift: 8538514
[2020-02-17 23:27:22] M50699483 Iter# = 14000000 Res64: DE3C314364459C1B. shift: 17100758
[2020-02-18 13:08:14] M50699483 Iter# = 15000000 Res64: 2EE5C7CC8F97B0B9. shift: 19048758
[2020-02-18 18:25:53] M50699483 Iter# = 16000000 Res64: 42ACBD803C14864F. shift: 14740307
[2020-02-18 23:47:06] M50699483 Iter# = 17000000 Res64: 693F9E609D94F89E. shift: 25311876
[2020-02-19 05:12:41] M50699483 Iter# = 18000000 Res64: 91713ADD0ED97C33. shift: 47268271
[2020-02-19 23:09:48] M50699483 Iter# = 19000000 Res64: 277A592C42FACB53. shift: 1811986
[2020-02-20 04:33:52] M50699483 Iter# = 20000000 Res64: A5682A939EF38D9A. shift: 49290389
[2020-02-20 10:01:20] M50699483 Iter# = 21000000 Res64: 8A8B4492FFC5470B. shift: 36614684
[2020-02-20 15:22:51] M50699483 Iter# = 22000000 Res64: D6D3F91689DDFAF1. shift: 32210977
[2020-02-20 20:44:28] M50699483 Iter# = 23000000 Res64: B48AC590FBA75FE2. shift: 28860043
[2020-02-21 02:08:56] M50699483 Iter# = 24000000 Res64: AD49E3830B9218D2. shift: 50445451
[2020-02-21 07:29:53] M50699483 Iter# = 25000000 Res64: E4382FB2661B845A. shift: 366548
[2020-02-21 12:52:39] M50699483 Iter# = 26000000 Res64: 83C74046877BC1D7. shift: 42829265
[2020-02-21 23:36:31] M50699483 Iter# = 27000000 Res64: EB4330B282026832. shift: 20458504
[2020-02-22 16:51:24] M50699483 Iter# = 28000000 Res64: 54A4F4CAADAAE0F9. shift: 38860600
[2020-02-23 10:15:11] M50699483 Iter# = 29000000 Res64: 9259066017B75695. shift: 11551033
[2020-02-24 03:25:44] M50699483 Iter# = 30000000 Res64: FCEE9793433D6108. shift: 34385914
[2020-02-25 16:59:20] M50699483 Iter# = 31000000 Res64: 0082DF9E0893D9D2. shift: 19736803
[2020-03-02 16:31:16] M50699483 Iter# = 32000000 Res64: 9B2D2709D0339C17. shift: 24549533
[2020-03-03 19:11:20] M50699483 Iter# = 33000000 Res64: 7A2717061965634C. shift: 14550787
[2020-03-04 11:28:49] M50699483 Iter# = 34000000 Res64: 265C4029588A33E5. shift: 38862938
[2020-03-05 15:55:23] M50699483 Iter# = 35000000 Res64: C7EA44FB046C7866. shift: 50647604
[2020-03-06 07:03:11] M50699483 Iter# = 36000000 Res64: FA9720C207C67570. shift: 4745807
[2020-03-06 21:57:14] M50699483 Iter# = 37000000 Res64: 00FBF511991AFED1. shift: 28508816
[2020-03-07 18:07:49] M50699483 Iter# = 38000000 Res64: 9E5851EE2357B20F. shift: 13420991
[2020-03-08 10:31:24] M50699483 Iter# = 39000000 Res64: FBE3CB493A25E922. shift: 9572750
[2020-03-09 01:44:31] M50699483 Iter# = 40000000 Res64: B5BA6759F6360A0C. shift: 18147084
[2020-03-09 17:00:40] M50699483 Iter# = 41000000 Res64: C94D0040BBE2D050. shift: 4939962
[2020-03-10 09:58:22] M50699483 Iter# = 42000000 Res64: 4B934CF3A43F88A2. shift: 41353787
[2020-03-11 03:07:06] M50699483 Iter# = 43000000 Res64: 5E7AEE9752819113. shift: 19740705
[2020-03-12 00:55:07] M50699483 Iter# = 44000000 Res64: A53578C09D91C1BE. shift: 15246709
[2020-03-12 23:38:08] M50699483 Iter# = 45000000 Res64: 561A2C7C7ACDCE8A. shift: 40676835
[2020-03-13 21:23:49] M50699483 Iter# = 46000000 Res64: 4DFDE6A9FF0157A9. shift: 20021361
[2020-03-14 12:26:05] M50699483 Iter# = 47000000 Res64: 7B305450D7F703AA. shift: 20321389
[2020-03-15 03:15:30] M50699483 Iter# = 48000000 Res64: 5D2C15BAA7AD490F. shift: 23413838
[2020-03-15 23:24:49] M50699483 Iter# = 49000000 Res64: 5E37DEB52957C078. shift: 14058798
[2020-03-16 14:40:29] M50699483 Iter# = 50000000 Res64: E1293037BCEA6CE7. shift: 44564712[/code]
FYI, reason for the pokiness is that this is my low-priority "Plan B" Mlucas run, in case the full-priority wavefront run it runs alongside (both using all 4 cores of this non-HT quad) on the same CPU flakes out for any reason - a not-uncommon occurrence on this ever-flaky Haswell CPU. None of my other Mlucas-running devices has ever shown this kind of behavioral tempestuousness.

Uncwilly 2020-03-17 21:09

[QUOTE=kriesel;539975]Thanks. I tried to reserve it on the manual page both before and immediately after claiming it in this thread, and the error message I got in both cases was it was not available, even though pulling up the exponent detail showed no one else had it at the time.[/QUOTE]Because you can't do that via the manual page (for exponents in the lowest categories). I popped it into a Prime95 worktodo and did a manual communication, sending new dates.

kriesel 2020-03-17 21:34

[QUOTE=ewmayer;539977]Ken, if your GPU client uses the convention that LL initial seed is iteration 0, you can compare interim every-1M-iter Res64s vs mine:[/QUOTE]
Looking good through 4M; the remainder is TBD. This is CUDALucas v2.06 on a GTX1080Ti. It does not output the offset value until the result record. It's logging interim residues every 50K iterations, so if we diverge, we should be able to narrow down when.

[CODE]Starting M50699483 fft length = 2688K
| Date Time | Test Num Iter Residue | FFT Error ms/It Time | ETA Done |
| Mar 17 14:40:04 | M50699483 1000000 0xff270405ea0d7239 | 2688K 0.26563 2.0281 101.40s | 1:03:59:10 1.97% |
| Mar 17 15:13:53 | M50699483 2000000 0x10bd8630cdc36e73 | 2688K 0.28125 2.0286 101.43s | 1:03:26:06 3.94% |
| Mar 17 15:47:43 | M50699483 3000000 0xd4f95c0fb91ad1ee | 2688K 0.25000 2.0291 101.45s | 1:02:52:47 5.91% |
| Mar 17 16:21:34 | M50699483 4000000 0x2f3d4d841dd790ed | 2688K 0.28125 2.0296 101.48s | 1:02:19:17 7.88% |
[/CODE][QUOTE=Uncwilly;539978]Because you can't do that via the manual page (for exponents in the lowest categories). I popped it into a Prime95 worktodo and did a manual communication, sending new dates.[/QUOTE]Uggh, I think I used to know that.
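If anyone wants to automate the side-by-side comparison, a throwaway script along these lines should do (my sketch; the filenames are placeholders, and the regexes are fitted to the two log formats excerpted above):
[CODE]# Pair up interim Res64s from an Mlucas log and a CUDALucas log by
# iteration number and report the first iteration where they diverge.
import re

mlucas_re = re.compile(r"Iter# = (\d+) Res64: ([0-9A-Fa-f]{16})")
cudalucas_re = re.compile(r"M\d+\s+(\d+)\s+0x([0-9A-Fa-f]{16})")

def residues(path, pattern):
    out = {}
    with open(path) as f:
        for line in f:
            m = pattern.search(line)
            if m:
                out[int(m.group(1))] = m.group(2).upper()
    return out

a = residues("mlucas.log", mlucas_re)        # placeholder filenames
b = residues("cudalucas.log", cudalucas_re)
for it in sorted(a.keys() & b.keys()):
    if a[it] != b[it]:
        print(f"first divergence at iteration {it}: {a[it]} vs {b[it]}")
        break
else:
    print("no divergence at any shared checkpoint")[/CODE]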

kriesel 2020-03-18 14:49

Ernst: Do you have interim save files you could resume from? My run is keeping save files spaced every 1M iterations.[QUOTE=ewmayer;539977]Ken, if your GPU client uses the convention that LL initial seed is iteration 0, you can compare interim every-1M-iter Res64s vs mine:
[code][2020-02-11 10:58:54] ...
[2020-03-03 19:11:20] M50699483 Iter# = 33000000 Res64: 7A2717061965634C. shift: 14550787
[2020-03-04 11:28:49] M50699483 Iter# = 34000000 Res64:[B] 265C4029588A33E5[/B]. shift: 38862938
[2020-03-05 15:55:23] M50699483 Iter# = 35000000 Res64: C7EA44FB046C7866. shift: 50647604
[2020-03-06 07:03:11] M50699483 Iter# = 36000000 Res64: FA9720C207C67570. shift: 4745807
[2020-03-06 21:57:14] M50699483 Iter# = 37000000 Res64: 00FBF511991AFED1. shift: 28508816
[2020-03-07 18:07:49] M50699483 Iter# = 38000000 Res64: 9E5851EE2357B20F. shift: 13420991
[2020-03-08 10:31:24] M50699483 Iter# = 39000000 Res64: FBE3CB493A25E922. shift: 9572750
[2020-03-09 01:44:31] M50699483 Iter# = 40000000 Res64: B5BA6759F6360A0C. shift: 18147084
[2020-03-09 17:00:40] M50699483 Iter# = 41000000 Res64: C94D0040BBE2D050. shift: 4939962
[2020-03-10 09:58:22] M50699483 Iter# = 42000000 Res64: 4B934CF3A43F88A2. shift: 41353787
[2020-03-11 03:07:06] M50699483 Iter# = 43000000 Res64: 5E7AEE9752819113. shift: 19740705
[2020-03-12 00:55:07] M50699483 Iter# = 44000000 Res64: A53578C09D91C1BE. shift: 15246709
[2020-03-12 23:38:08] M50699483 Iter# = 45000000 Res64: 561A2C7C7ACDCE8A. shift: 40676835
[2020-03-13 21:23:49] M50699483 Iter# = 46000000 Res64: 4DFDE6A9FF0157A9. shift: 20021361
[2020-03-14 12:26:05] M50699483 Iter# = 47000000 Res64: 7B305450D7F703AA. shift: 20321389
[2020-03-15 03:15:30] M50699483 Iter# = 48000000 Res64: 5D2C15BAA7AD490F. shift: 23413838
[2020-03-15 23:24:49] M50699483 Iter# = 49000000 Res64: 5E37DEB52957C078. shift: 14058798
[2020-03-16 14:40:29] M50699483 Iter# = 50000000 Res64: E1293037BCEA6CE7. shift: 44564712[/code][/QUOTE]
My CUDALucas v2.06 on GTX1080Ti run diverges from yours between 33M and 34M:

[CODE]| Mar 18 07:35:50 | M50699483 31000000 0x0082df9e0893d9d2 | 2688K 0.25000 2.0313 101.56s | 11:06:56 61.14% |
| Mar 18 08:09:39 | M50699483 32000000 0x9b2d2709d0339c17 | 2688K 0.28125 2.0253 101.26s | 10:33:03 63.11% |
| Mar 18 08:43:25 | M50699483 33000000 0x7a2717061965634c | 2688K 0.25000 2.0253 101.26s | 9:59:09 65.08% |
| Mar 18 09:17:11 | M50699483 34000000 [B]0xa22077a84ffa1c25[/B] | 2688K 0.26563 2.0254 101.27s | 9:25:15 67.06% |
[/CODE]At finer granularity, with notes on progress of repeat from 33M:[CODE]| Mar 18 08:45:06 | M50699483 33050000 0x37c64cf8f8be0f90 | 2688K 0.26563 2.0362 101.81s | 9:57:28 65.18% | reproduced
| Mar 18 08:46:47 | M50699483 33100000 0xbb849ff55738c8d5 | 2688K 0.26563 2.0253 101.26s | 9:55:46 65.28% | reproduced
| Mar 18 08:48:29 | M50699483 33150000 0x1cd84913571585e1 | 2688K 0.26563 2.0253 101.26s | 9:54:04 65.38% | reproduced
| Mar 18 08:50:10 | M50699483 33200000 0xef8c9ea2bb982cd2 | 2688K 0.26563 2.0254 101.27s | 9:52:23 65.48% | reproduced
| Mar 18 08:51:51 | M50699483 33250000 0xe80447b573ef030a | 2688K 0.25000 2.0253 101.26s | 9:50:41 65.58% | reproduced
| Date Time | Test Num Iter Residue | FFT Error ms/It Time | ETA Done |
| Mar 18 08:53:32 | M50699483 33300000 0x0ec1cc2621d92648 | 2688K 0.26563 2.0254 101.26s | 9:48:59 65.68% | reproduced
| Mar 18 08:55:14 | M50699483 33350000 0xac334a394a51f8df | 2688K 0.28125 2.0253 101.26s | 9:47:17 65.77% | reproduced
| Mar 18 08:56:55 | M50699483 33400000 0x5647ecc4a6bfe11b | 2688K 0.26563 2.0252 101.26s | 9:45:36 65.87% | reproduced
| Mar 18 08:58:36 | M50699483 33450000 0xab3476d0c81b41cc | 2688K 0.25000 2.0252 101.26s | 9:43:54 65.97% | reproduced
| Mar 18 09:00:17 | M50699483 33500000 0x64d625bfc4429707 | 2688K 0.28125 2.0254 101.27s | 9:42:12 66.07% | reproduced
| Mar 18 09:01:59 | M50699483 33550000 0xdf554459fc451fe2 | 2688K 0.28125 2.0253 101.26s | 9:40:31 66.17% | reproduced
| Mar 18 09:03:40 | M50699483 33600000 0x219a46abd5061aa9 | 2688K 0.25000 2.0253 101.26s | 9:38:49 66.27% | reproduced
| Mar 18 09:05:21 | M50699483 33650000 0xccc27947bc7707c8 | 2688K 0.28125 2.0252 101.26s | 9:37:07 66.37% | reproduced
| Mar 18 09:07:03 | M50699483 33700000 0xf77df0cc388e7949 | 2688K 0.26563 2.0254 101.27s | 9:35:26 66.47% | reproduced
| Mar 18 09:08:44 | M50699483 33750000 0xa64fba39022d141f | 2688K 0.28125 2.0254 101.27s | 9:33:44 66.56% | reproduced
| Mar 18 09:10:25 | M50699483 33800000 0x56e89f72185d56f7 | 2688K 0.25000 2.0253 101.26s | 9:32:02 66.66% | reproduced
| Mar 18 09:12:06 | M50699483 33850000 0x0cbcd708bbcfae8b | 2688K 0.26563 2.0253 101.26s | 9:30:20 66.76% | reproduced
| Mar 18 09:13:48 | M50699483 33900000 0x820a85c3d8c9752c | 2688K 0.26563 2.0253 101.26s | 9:28:39 66.86% | reproduced
| Mar 18 09:15:29 | M50699483 33950000 0x54799a3e9105fa36 | 2688K 0.25000 2.0253 101.26s | 9:26:57 66.96% | reproduced
| Mar 18 09:17:11 | M50699483 34000000 0xa22077a84ffa1c25 | 2688K 0.26563 2.0254 101.27s | 9:25:15 67.06% | reproduced
[/CODE]Halting repeat run, resuming from 35M+ save file.

ewmayer 2020-03-18 18:29

Ken, Mlucas saves every-10M savefiles by default. Residue-reporting granularity on runs of <= 4 threads is every 10Kiter. I have narrowed the divergence to the 33.10M-33.15M iteration interval; the 2 Res64s which bookend it are bolded in my logfile excerpt below:
[code][2020-03-03 20:39:45] M50699483 Iter# = 33100000 Res64: [b]BB849FF55738C8D5[/b]. AvgMaxErr = 0.071159536. MaxErr = 0.109375000. Residue shift count = 9258343.
[2020-03-03 20:48:40] M50699483 Iter# = 33110000 Res64: 5B7BD8EA7D60E545. AvgMaxErr = 0.071040688. MaxErr = 0.109375000. Residue shift count = 24888588.
[2020-03-03 20:57:28] M50699483 Iter# = 33120000 Res64: 98FBFA574398CDC8. AvgMaxErr = 0.071026953. MaxErr = 0.109375000. Residue shift count = 4513422.
[2020-03-03 21:06:14] M50699483 Iter# = 33130000 Res64: DBEC805AFE89F02C. AvgMaxErr = 0.071080518. MaxErr = 0.109375000. Residue shift count = 26304104.
M50699483 Roundoff warning on iteration 33139667, maxerr = 0.500000000000
Retrying iteration interval to see if roundoff error is reproducible.
Restarting M50699483 at iteration = 33130000. Res64: DBEC805AFE89F02C, residue shift count = 26304104
M50699483: using FFT length 2816K = 2883584 8-byte floats, initial residue shift count = 26304104
this gives an average 17.582107197154652 bits per digit
Retry of iteration interval with fatal roundoff error was successful.
[2020-03-03 21:23:43] M50699483 Iter# = 33140000 Res64: 3FE1873AE0CFDBAD. AvgMaxErr = 0.071077051. MaxErr = 0.160156250. Residue shift count = 10073663.
[2020-03-03 21:32:33] M50699483 Iter# = 33150000 Res64: [b]68D413A52601DAB7[/b]. AvgMaxErr = 0.071095947. MaxErr = 0.101562500. Residue shift count = 12690212.[/code]
Note the kind of data-corruption-detected sudden-fatal-ROE as seen above is not at all unusual on my notoriously flaky Haswell system - I find running in such a context to be a valuable QA exercise because it approximates the worst-case scenario users of my code may face with their own hardware. So you can see from the above the program detected a glitch in the matrix, as a result of which it restarted from the iteration = 33130000 savefile, and retry of the ensuing 10Kiter interval was successful. But for every such detected data-corruption error there is a smaller number of 'silent' ones, which are the reason the PRP+Gerbicz option is so valuable on this kind of hardware - my current production runs are doing PRP tests; this DC was just the last of a long-running low-priority LL-DC batch.
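As an aside for readers wondering why the Gerbicz check catches such errors so reliably, here is the core identity as I understand it, in a toy Python sketch (my illustration, not Mlucas or gpuowl code; real programs use FFT residues and block sizes near 1000):
[CODE]# If u_j = 3^(2^j) mod N, then u_{j+L} = u_j^(2^L), so the running product d
# of every L-th residue must satisfy d_new = d^(2^L) * 3 mod N. Any compute
# error inside a block breaks this identity with overwhelming probability.
p, L = 1000, 50              # toy modulus N = 2^p - 1 (need not be prime)
N = (1 << p) - 1
u = d = 3
for block in range(p // L):
    for _ in range(L):
        u = u * u % N        # L squarings of the PRP residue
    d_new = d * u % N        # extend the checksum product
    assert d_new == pow(d, 1 << L, N) * 3 % N, "error detected: roll back!"
    d = d_new
print("all Gerbicz checks passed")[/CODE]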

Ken, Uncwilly, can either of you rerun the same 50K interval with the finer 10Kiter reporting granularity? It would be interesting to see if the divergence occurred before the above fatal-ROE/retry incident, or after.

In the meantime I've low-prioritized my 2 production tests (one main, one lower-priority 'backup' run in case the first crashes) on the Haswell and restarted the run from the 30M savefile. But weirdly, doing a 'top' just now I see my 2 production runs are listed at the lowest priority (renice -n 19) but are still grabbing cycles at a higher priority than the retry-DC-from-30M run ... 'sudo renice -n -4' of the latter doesn't help, DC run still just getting 1 core's worth of cycles. Frickin' Ubuntu ... so 'fg' both production jobs and ctrl-z to suspend them. That's better ... with ~3Miter to go, my DC-retry run will hit the divergence point in ~7 hours.

kriesel 2020-03-18 19:32

[QUOTE=ewmayer;540068]Ken, Mlucas saves every-10M savefiles by default. Residue-reporting granularity on runs of <= 4 threads is every 10Kiter. I have narrowed the divergence to the 33.10M-33.15M iteration interval; the 2 Res64s which bookend it are bolded in my logfile excerpt below
...

Ken, Uncwilly, can either of you rerun the same 50K interval with the finer 10Kiter reporting granularity? It would be interesting to see if the divergence occurred before the above fatal-ROE/retry incident, or after.[/QUOTE]Okay. Forked it to a GTX1080 on a different system that was doing interruptible P-1 in gpuowl; CUDALucas v2.06 with 5K-iteration console printout and a 100K save-file interval, restarted from the 33M save file.
[CODE]CUDALucas v2.06beta 64-bit build, compiled May 5 2017 @ 13:02:54

binary compiled for CUDA 8.0
CUDA runtime version 8.0
CUDA driver version 8.0

------- DEVICE 0 -------
name GeForce GTX 1080
UUID GPU-5e2c5531-4684-57ec-6393-8b762f286c70
ECC Support? Disabled
Compatibility 6.1
clockRate (MHz) 1797
memClockRate (MHz) 5005
totalGlobalMem 8589934592
totalConstMem 65536
l2CacheSize 2097152
sharedMemPerBlock 49152
regsPerBlock 65536
warpSize 32
memPitch 2147483647
maxThreadsPerBlock 1024
maxThreadsPerMP 2048
multiProcessorCount 20
maxThreadsDim[3] 1024,1024,64
maxGridSize[3] 2147483647,65535,65535
textureAlignment 512
deviceOverlap 1
pciDeviceID 0
pciBusID 4

You may experience a small delay on 1st startup to due to Just-in-Time Compilation

Using threads: square 32, splice 128.

Continuing M50699483 @ iteration 33000001 with fft length 2688K, 65.09% done

| Date Time | Test Num Iter Residue | FFT Error ms/It Time | ETA Done |
| Mar 18 14:17:10 | M50699483 33005000 0xdd429c1ccbd7a62a | 2688K 0.25000 2.8121 14.05s | 9:59:01 65.09% |
| Mar 18 14:17:24 | M50699483 33010000 0xff4bbbe125ee1176 | 2688K 0.23438 2.8099 14.04s | 9:58:53 65.10% |
| Mar 18 14:17:38 | M50699483 33015000 0x7bc220867e3d7921 | 2688K 0.25000 2.8094 14.04s | 9:58:45 65.11% |
| Mar 18 14:17:52 | M50699483 33020000 0x4ab93d7fb1b9410a | 2688K 0.25000 2.8088 14.04s | 9:58:37 65.12% |
| Mar 18 14:18:06 | M50699483 33025000 0x0dd3e63997a7117e | 2688K 0.25000 2.8055 14.02s | 9:58:29 65.13% |
| Mar 18 14:18:20 | M50699483 33030000 0xdb45cff854629006 | 2688K 0.25000 2.8100 14.05s | 9:58:21 65.14% |
| Mar 18 14:18:34 | M50699483 33035000 0xc927fe56adabaf33 | 2688K 0.25000 2.8171 14.08s | 9:58:13 65.15% |
| Mar 18 14:18:48 | M50699483 33040000 0x671966f21715efe7 | 2688K 0.25000 2.8176 14.08s | 9:58:05 65.16% |
| Mar 18 14:19:02 | M50699483 33045000 0xc24775f499646abb | 2688K 0.25000 2.8184 14.09s | 9:57:56 65.17% |
| Mar 18 14:19:16 | M50699483 33050000 0x37c64cf8f8be0f90 | 2688K 0.25000 2.8173 14.08s | 9:57:48 65.18% |
| Mar 18 14:19:31 | M50699483 33055000 0xea0b3d9144c22bab | 2688K 0.23438 2.8408 14.20s | 9:57:40 65.19% |
| Mar 18 14:19:45 | M50699483 33060000 0xf4e172ae2efbf3c0 | 2688K 0.25000 2.8356 14.17s | 9:57:32 65.20% |
| Mar 18 14:19:59 | M50699483 33065000 0x72232fa331f55cf9 | 2688K 0.25000 2.8219 14.10s | 9:57:24 65.21% |
| Mar 18 14:20:13 | M50699483 33070000 0x207baea46fe47f92 | 2688K 0.23438 2.8226 14.11s | 9:57:16 65.22% |
| Mar 18 14:20:27 | M50699483 33075000 0x07d271a17b19c3d5 | 2688K 0.21875 2.8245 14.12s | 9:57:08 65.23% |
| Mar 18 14:20:41 | M50699483 33080000 0x1ee9e9c6554ccf1e | 2688K 0.25000 2.8241 14.12s | 9:57:00 65.24% |
| Mar 18 14:20:55 | M50699483 33085000 0xdc08ddc9d7820922 | 2688K 0.25000 2.8248 14.12s | 9:56:52 65.25% |
| Mar 18 14:21:10 | M50699483 33090000 0xd19878f22036fac1 | 2688K 0.21875 2.8245 14.12s | 9:56:44 65.26% |
| Mar 18 14:21:24 | M50699483 33095000 0x3e5e1634ab32f2e3 | 2688K 0.25000 2.8275 14.13s | 9:56:36 65.27% |
| Date Time | Test Num Iter Residue | FFT Error ms/It Time | ETA Done |
| Mar 18 14:21:38 | M50699483 33100000 0xbb849ff55738c8d5 | 2688K 0.25000 2.8287 14.14s | 9:56:28 65.28% |
| Mar 18 14:21:52 | M50699483 33105000 0xfe2846d8d8d8e12f | 2688K 0.25000 2.9227 14.61s | 9:56:20 65.29% |
| Mar 18 14:22:07 | M50699483 33110000 0x5b7bd8ea7d60e545 | 2688K 0.25000 2.8426 14.21s | 9:56:12 65.30% |
| Mar 18 14:22:21 | M50699483 33115000 0x3dc9f02704f5e3cc | 2688K 0.25000 2.8421 14.21s | 9:56:04 65.31% |
| Mar 18 14:22:35 | M50699483 33120000 0x98fbfa574398cdc8 | 2688K 0.25000 2.8424 14.21s | 9:55:56 65.32% |
| Mar 18 14:22:49 | M50699483 33125000 0x3c08397825944984 | 2688K 0.25000 2.8407 14.20s | 9:55:48 65.33% |
| Mar 18 14:23:03 | M50699483 33130000 [B][COLOR=SeaGreen]0xdbec805afe89f02c[/COLOR][/B] | 2688K 0.25000 2.8380 14.19s | 9:55:40 65.34% |
| Mar 18 14:23:18 | M50699483 33135000 0xcc37b1c7d24241ee | 2688K 0.25000 2.8364 14.18s | 9:55:32 65.35% |
| Mar 18 14:23:32 | M50699483 33140000 [B][COLOR=Red]0x067c546da1f13507[/COLOR][/B] | 2688K 0.25000 2.8237 14.11s | 9:55:24 65.36% |
| Mar 18 14:23:46 | M50699483 33145000 0x62d3b4b0d086b501 | 2688K 0.26563 2.8250 14.12s | 9:55:16 65.37% |
| Mar 18 14:24:00 | M50699483 33150000 0x1cd84913571585e1 | 2688K 0.23438 2.8247 14.12s | 9:55:08 65.38% |
| Mar 18 14:24:14 | M50699483 33155000 0x5b55b049e6038c81 | 2688K 0.24219 2.8244 14.12s | 9:55:00 65.39% |
| Mar 18 14:24:28 | M50699483 33160000 0x4fa86767dd1f39bf | 2688K 0.25000 2.8245 14.12s | 9:54:52 65.40% |
| Mar 18 14:24:42 | M50699483 33165000 0x2d062794c09182d6 | 2688K 0.23438 2.8246 14.12s | 9:54:43 65.41% |
| Mar 18 14:24:56 | M50699483 33170000 0xf644b10ec263f600 | 2688K 0.25000 2.8246 14.12s | 9:54:35 65.42% |
| Mar 18 14:25:11 | M50699483 33175000 0xf9a9dc418fc724ea | 2688K 0.25000 2.8249 14.12s | 9:54:27 65.43% |
| Mar 18 14:25:25 | M50699483 33180000 0xb461f655a081955b | 2688K 0.25000 2.8244 14.12s | 9:54:19 65.44% |
| Mar 18 14:25:39 | M50699483 33185000 0xf43ce356e8e26ea7 | 2688K 0.22461 2.8258 14.12s | 9:54:11 65.45% |
| Mar 18 14:25:53 | M50699483 33190000 0xba94f36ee57b6775 | 2688K 0.24219 2.8256 14.12s | 9:54:03 65.46% |
| Date Time | Test Num Iter Residue | FFT Error ms/It Time | ETA Done |
| Mar 18 14:26:07 | M50699483 33195000 0x13135c7b4ff819a7 | 2688K 0.23438 2.8256 14.12s | 9:53:55 65.47% |
| Mar 18 14:26:22 | M50699483 33200000 [B]0xef8c9ea2bb982cd2[/B] | 2688K 0.25000 2.8248 14.12s | 9:53:47 65.48% |[/CODE]Matches the earlier [B]GTX1080Ti[/B] residues, reposted below:

[CODE]| Date Time | Test Num Iter Residue | FFT Error ms/It Time | ETA Done |
| Mar 18 09:57:27 | M50699483 33050000 0x37c64cf8f8be0f90 | 2688K 0.25000 2.0097 100.48s | 9:57:27 65.18% |
| Mar 18 09:59:08 | M50699483 33100000 0xbb849ff55738c8d5 | 2688K 0.25000 2.0190 100.95s | 9:55:45 65.28% |
| Mar 18 10:00:49 | M50699483 33150000 0x1cd84913571585e1 | 2688K 0.26563 2.0231 101.15s | 9:54:03 65.38% |
| Mar 18 10:02:30 | M50699483 33200000 [B]0xef8c9ea2bb982cd2[/B] | 2688K 0.25000 2.0237 101.18s | 9:52:22 65.48% |
[/CODE]CUDALucas is doing well, considering it doesn't even have the Jacobi check.

ewmayer 2020-03-18 19:46

Ken, is every-10K reporting an option for you?

kriesel 2020-03-18 20:01

[QUOTE=ewmayer;540079]Ken, is every-10K reporting an option for you?[/QUOTE]Yes, and finer or coarser. If Mlucas can't adjust, you may want to edit the Mlucas source and recompile, so that when divergences occur you can examine where they happen at finer resolution.
Fine/frequent output can be edited down; coarse/infrequent output has to be rerun when questions arise that more frequent res64 output might answer.
I was not sure what the lower limit is in CUDALucas; it is apparently a single iteration. I'm not sure what the upper limit is, but I've run 10[SUP]5[/SUP].
From CUDALucas.ini:[CODE]# ErrorIterations tells how often the roundoff error is checked. Larger values
# give shorter iteration times, but introduce some uncertainty as to the actual
# maximum roundoff error that occurs during the test. Default is 100.
# ReportIterations is the same as the -x option; it determines how often
# screen output is written. Default is 10000.
# CheckpointIterations is the same as the -c option; it determines how often
# checkpoints are written. Default is 100000.
# Each of these values should be of the form k * 10^n with k = 1, 2, or 5.

ErrorIterations=100
ReportIterations=50000
CheckpointIterations=1000000[/CODE]A quick experiment shows that ReportIterations = 1 works, although the speed penalty is considerable relative to the 2 ms/iter normal case, and the log file would explode in size:[CODE]| Date Time | Test Num Iter Residue | FFT Error ms/It Time | ETA Done |
| Mar 18 14:55:54 | M50699483 42788602 0xc672edd2fec9d543 | 2688K 0.25000 1.#INF 0.00s | 4:27:44 84.39% |
| Mar 18 14:55:54 | M50699483 42788603 0x073e9eb282310661 | 2688K 0.20313 10.4790 0.01s | 4:27:44 84.39% |
| Mar 18 14:55:54 | M50699483 42788604 0xbe3d42c802206844 | 2688K 0.20313 9.8760 0.00s | 4:27:44 84.39% |[/CODE]There's little speed penalty at ReportIterations = 100 or higher.[CODE]Iter time on GTX1080Ti, CUDALucas v2.06, M53M, versus ReportIterations
# ms/iter
1 9.5
10 2.75
100 2.05
1000 2.02
10000 2.02
50000 2.03[/CODE]

kriesel 2020-03-18 21:15

Ironically, Ernst's res64 was:[CODE][2020-03-03 21:23:43] M50699483 Iter# = 33140000 Res64: [B]3FE1873AE0CFD[/B][COLOR=Red][B]BAD[/B][/COLOR]. AvgMaxErr = 0.071077051. MaxErr = 0.160156250. Residue shift count = 10073663.
[/CODE]vs. my CUDALucas output[CODE]| Mar 18 14:23:03 | M50699483 33130000 0xdbec805afe89f02c | 2688K 0.25000 2.8380 14.19s | 9:55:40 65.34% |
| Mar 18 14:23:18 | M50699483 33135000 0xcc37b1c7d24241ee | 2688K 0.25000 2.8364 14.18s | 9:55:32 65.35% |
| Mar 18 14:23:32 | M50699483 33140000 [B]0x067c546da1f13507[/B] | 2688K 0.25000 2.8237 14.11s | 9:55:24 65.36% |
[/CODE]

ewmayer 2020-03-18 21:57

[QUOTE=kriesel;540089]Ironically, Ernst's res64 was:[snip][/QUOTE]

LOL, I hadn't noticed that the Res64 was itself confessing its badness! :) Anyhow, once my re-run-from-30M gets close, I'll save a copy of the restartfile in case I want to re-run a small subinterval with finer-than-10K granularity.

Ken, I assume your DC run has completed or is close? Still don't see your result appearing on the exponent page.

Uncwilly 2020-03-18 22:08

[code][2020-02-13 23:10:28] M50699483 Iter# = 6000000 Res64: 0C3E78B1C77FA688. shift: 33080656[/code]
[CODE][Mar 18 14:19] M50699483 interim LL residue AF0BF1AEDBD6D468 at iteration 6000000[/CODE]
I guess my shift is not zero.

kriesel 2020-03-18 22:11

[QUOTE=ewmayer;540098]LOL, I hadn't noticed that the Res64 was itself confessing its badness! :) Anyhow, once my re-run-from-30M gets close, I'll save a copy of the restartfile in case I want to re-run a small subinterval with finer-than-10K granularity.

Ken, I assume your DC run has completed or is close? Still don't see your result appearing on the exponent page.[/QUOTE]At ~92% now. I'm also running it through P-1 factoring to PrimeNet bounds, which is likely to finish first. I will post the LL 1M-pitch interim residues here after the final residues are compared and the results are submitted to PrimeNet. Expect ~6pm California time. I will also state whether uncwilly's fourth test is needed.

ewmayer 2020-03-18 22:38

[QUOTE=Uncwilly;540100][code][2020-02-13 23:10:28] M50699483 Iter# = 6000000 Res64: 0C3E78B1C77FA688. shift: 33080656[/code]
[CODE][Mar 18 14:19] M50699483 interim LL residue AF0BF1AEDBD6D468 at iteration 6000000[/CODE]
I guess my shift is not zero.[/QUOTE]

Well, that's kinda silly - my Mlucas run also used nonzero shift, but I have the code remove the shift for purposes of Res64 reporting and writing-residue-to-savefile, specifically to ease such side-by-side-run cross-comparison.

But wait - whenever we have a new prime verification, George or someone else does a Prime95/mprime run and cross-compares Res64s to one of the other codes. So there must be a reporting option which causes unshifted-residue Res64 values to get printed. (But IMO that shouldn't need a special flag to be set).

kriesel 2020-03-18 22:47

[QUOTE=Uncwilly;540100][code][2020-02-13 23:10:28] M50699483 Iter# = 6000000 Res64: 0C3E78B1C77FA688. shift: 33080656[/code][CODE][Mar 18 14:19] M50699483 interim LL residue AF0BF1AEDBD6D468 at iteration 6000000[/CODE]I guess my shift is not zero.[/QUOTE]If using prime95 or mprime, there's an option to print 3 successive residues. That's necessary because those programs start the counter at 2, not 0, ending at p, not p-2. So the usual output is two iterations early. Mlucas and others start at 0, as the Lucas-Lehmer series does; s[SUB]0[/SUB]=4, s[SUB]n+1[/SUB]=(s[SUB]n[/SUB][SUP]2[/SUP]-2) mod Mp.

Res64 values output by the programs are hexadecimal representations of the least significant 64 bits of an LL (or PRP or P-1) iteration result. Matching computation type, input, and iteration number is required, as well as dealing with shift so the LSBs can be located and converted to hex. The off-by-two iteration count is a long-standing difference that must be handled by the user of prime95 or mprime reading its output, as I recall. What most programs call iteration x-2, prime95 calls x. So compare prime95 iteration x+2 to Mlucas iteration x or CUDALucas iteration x.
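
To make the numbering concrete, here is a toy Python sketch (an illustration only: tiny exponent, plain integer arithmetic, residue shift not modeled; real clients use FFT-based multiplication). It counts iterations the Mlucas/CUDALucas way, from s[SUB]0[/SUB]=4 at iteration 0, so its iteration x carries label x+2 in prime95/mprime output:
[CODE]# Toy LL recurrence and Res64 extraction (illustration only; see caveats above).
def ll_res64(p, iters):
    """Res64 (low 64 bits, upper-case hex) after `iters` LL steps from s_0 = 4."""
    Mp = (1 << p) - 1
    s = 4
    for _ in range(iters):
        s = (s * s - 2) % Mp          # s_(n+1) = s_n^2 - 2 mod M_p
    return f"{s & 0xFFFFFFFFFFFFFFFF:016X}"

# M_p is prime iff s_(p-2) == 0 mod M_p; M13 = 8191 is prime:
assert ll_res64(13, 13 - 2) == "0000000000000000"
[/CODE]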

ewmayer 2020-03-18 22:55

[QUOTE=kriesel;540108]If using prime95 or mprime, there's an option to print 3 successive residues. That's necessary because those programs start the counter at 2, not 0, ending at p, not p-2. So the usual output is two iterations early. Mlucas and others start at 0, as the Lucas-Lehmer series does; s[SUB]0[/SUB]=4, s[SUB]n+1[/SUB]=(s[SUB]n[/SUB][SUP]2[/SUP]-2) mod Mp.[/QUOTE]

Ah, I thought Uncwilly already had said option (another "should be the default" item) enabled. So this is the 2-iteration offset, not anything related to the shift, then.

Uncwilly 2020-03-18 23:29

[QUOTE=kriesel;540108]If using prime95 or mprime, there's an option to print 3 successive residues.[/QUOTE]
[QUOTE=ewmayer;540109]Ah, I thought Uncwilly already had said option (another "should be the default" item) enabled. So this is the 2-iteration offset, not anything related to the shift, then.[/QUOTE]It is doing that. I just posted the exact number. I won't be near that machine for about 16 hours. I will post the others when I get there (provided that I can).

kriesel 2020-03-19 00:36

Uncwilly's run is not needed. I have a match to Stephan Grupp's final res64.
CUDALucas v2.06 1M-interval outputs below.

[CODE]| Date Time | Test Num Iter Residue | FFT Error ms/It Time | ETA Done |
| Mar 17 14:40:04 | M50699483 1000000 0xff270405ea0d7239 | 2688K 0.26563 2.0281 101.40s | 1:03:59:10 1.97% |
| Mar 17 15:13:53 | M50699483 2000000 0x10bd8630cdc36e73 | 2688K 0.28125 2.0286 101.43s | 1:03:26:06 3.94% |
| Mar 17 15:47:43 | M50699483 3000000 0xd4f95c0fb91ad1ee | 2688K 0.25000 2.0291 101.45s | 1:02:52:47 5.91% |
| Mar 17 16:21:34 | M50699483 4000000 0x2f3d4d841dd790ed | 2688K 0.28125 2.0296 101.48s | 1:02:19:17 7.88% |
| Mar 17 16:55:25 | M50699483 5000000 0x6762222e6dac7a3d | 2688K 0.28125 2.0299 101.49s | 1:01:45:49 9.86% |
| Mar 17 17:29:17 | M50699483 6000000 0x0c3e78b1c77fa688 | 2688K 0.26563 2.0317 101.58s | 1:01:12:17 11.83% |
| Mar 17 18:03:10 | M50699483 7000000 0x62afd692a5ec4eb2 | 2688K 0.28125 2.0326 101.63s | 1:00:38:43 13.80% |
| Mar 17 18:37:03 | M50699483 8000000 0x9f2ac42f44fa0d49 | 2688K 0.25000 2.0329 101.64s | 1:00:05:08 15.77% |
| Mar 17 19:10:57 | M50699483 9000000 0x2572440b76cf7b14 | 2688K 0.25000 2.0326 101.63s | 23:31:33 17.75% |
| Mar 17 19:44:50 | M50699483 10000000 0x5d942313da74c513 | 2688K 0.28125 2.0331 101.65s | 22:57:51 19.72% |
| Mar 17 20:18:44 | M50699483 11000000 0xaf441cb0956493cf | 2688K 0.25391 2.0338 101.69s | 22:24:08 21.69% |
| Mar 17 20:52:37 | M50699483 12000000 0xfd12342a64d97b8f | 2688K 0.25000 2.0335 101.67s | 21:50:23 23.66% |
| Mar 17 21:26:31 | M50699483 13000000 0x2e9753d14381b557 | 2688K 0.28125 2.0331 101.65s | 21:16:38 25.64% |
| Mar 17 22:00:25 | M50699483 14000000 0xde3c314364459c1b | 2688K 0.28125 2.0337 101.68s | 20:42:52 27.61% |
| Mar 17 22:34:13 | M50699483 15000000 0x2ee5c7cc8f97b0b9 | 2688K 0.25000 2.0264 101.32s | 20:08:50 29.58% |
| Mar 17 23:08:00 | M50699483 16000000 0x42acbd803c14864f | 2688K 0.26563 2.0265 101.32s | 19:34:48 31.55% |
| Mar 17 23:41:52 | M50699483 17000000 0x693f9e609d94f89e | 2688K 0.28125 2.0333 101.66s | 19:00:57 33.53% |
| Mar 18 00:15:45 | M50699483 18000000 0x91713add0ed97c33 | 2688K 0.25000 2.0318 101.59s | 18:27:08 35.50% |
| Mar 18 00:49:38 | M50699483 19000000 0x277a592c42facb53 | 2688K 0.28125 2.0330 101.65s | 17:53:19 37.47% |
| Date Time | Test Num Iter Residue | FFT Error ms/It Time | ETA Done |
| Mar 18 01:23:30 | M50699483 20000000 0xa5682a939ef38d9a | 2688K 0.25000 2.0328 101.64s | 17:19:29 39.44% |
| Mar 18 01:57:23 | M50699483 21000000 0x8a8b4492ffc5470b | 2688K 0.25391 2.0321 101.60s | 16:45:39 41.42% |
| Mar 18 02:31:15 | M50699483 22000000 0xd6d3f91689ddfaf1 | 2688K 0.25000 2.0314 101.57s | 16:11:48 43.39% |
| Mar 18 03:05:07 | M50699483 23000000 0xb48ac590fba75fe2 | 2688K 0.25000 2.0317 101.58s | 15:37:57 45.36% |
| Mar 18 03:38:59 | M50699483 24000000 0xad49e3830b9218d2 | 2688K 0.25342 2.0319 101.59s | 15:04:05 47.33% |
| Mar 18 04:12:51 | M50699483 25000000 0xe4382fb2661b845a | 2688K 0.28125 2.0279 101.39s | 14:30:14 49.31% |
| Mar 18 04:46:38 | M50699483 26000000 0x83c74046877bc1d7 | 2688K 0.25781 2.0262 101.31s | 13:56:17 51.28% |
| Mar 18 05:20:25 | M50699483 27000000 0xeb4330b282026832 | 2688K 0.26563 2.0264 101.32s | 13:22:22 53.25% |
| Mar 18 05:54:15 | M50699483 28000000 0x54a4f4caadaae0f9 | 2688K 0.26563 2.0313 101.56s | 12:48:30 55.22% |
| Mar 18 06:28:07 | M50699483 29000000 0x9259066017b75695 | 2688K 0.31250 2.0316 101.58s | 12:14:39 57.19% |
| Mar 18 07:01:59 | M50699483 30000000 0xfcee9793433d6108 | 2688K 0.25000 2.0309 101.54s | 11:40:47 59.17% |
| Mar 18 07:35:50 | M50699483 31000000 0x0082df9e0893d9d2 | 2688K 0.25000 2.0313 101.56s | 11:06:56 61.14% |
| Mar 18 08:09:39 | M50699483 32000000 0x9b2d2709d0339c17 | 2688K 0.28125 2.0253 101.26s | 10:33:03 63.11% |
| Mar 18 08:43:25 | M50699483 33000000 0x7a2717061965634c | 2688K 0.25000 2.0253 101.26s | 9:59:09 65.08% | Ernst's run matched through 33.13M
| Mar 18 09:17:11 | M50699483 34000000 0xa22077a84ffa1c25 | 2688K 0.26563 2.0254 101.27s | 9:25:15 67.06% | Ernst's run mismatches from 33.14M onward
| Mar 18 09:50:57 | M50699483 35000000 0x3724fb6212e2582c | 2688K 0.28125 2.0253 101.26s | 8:51:22 69.03% |
| Mar 18 11:05:58 | M50699483 36000000 0x37b839cd056612f1 | 2688K 0.26563 2.0287 101.43s | 8:17:29 71.00% |
| Mar 18 11:39:47 | M50699483 37000000 0x0cbae6ba6e70a9cd | 2688K 0.26563 2.0283 101.41s | 7:43:38 72.97% |
| Mar 18 12:13:37 | M50699483 38000000 0x8c7564c5791e9f0c | 2688K 0.28125 2.0291 101.45s | 7:09:47 74.95% |
| Mar 18 12:47:27 | M50699483 39000000 0x2370560da2daf81d | 2688K 0.25000 2.0296 101.48s | 6:35:56 76.92% |
| Mar 18 13:21:17 | M50699483 40000000 0x30e2d45c4779c59d | 2688K 0.25781 2.0300 101.50s | 6:02:06 78.89% |
| Mar 18 13:55:08 | M50699483 41000000 0xa9554a0bd7a1a189 | 2688K 0.28125 2.0307 101.53s | 5:28:15 80.86% |
| Mar 18 14:29:00 | M50699483 42000000 0x63e3f73e9f80d7e7 | 2688K 0.26563 2.0310 101.55s | 4:54:25 82.84% |
| Mar 18 15:04:07 | M50699483 43000000 0x71c2bd796b14659d | 2688K 0.25000 2.0267 20.26s | 4:20:37 84.81% |
| Mar 18 15:40:30 | M50699483 44000000 0xb2471a858f2cf03c | 2688K 0.25000 2.0294 20.29s | 3:46:47 86.78% |
| Mar 18 16:14:19 | M50699483 45000000 0x962d7ea1b99c8cdd | 2688K 0.24219 2.0296 20.29s | 3:12:55 88.75% |
| Mar 18 16:48:08 | M50699483 46000000 0xb4f6cda6c171dd1d | 2688K 0.23438 2.0274 20.27s | 2:39:04 90.73% |
| Mar 18 17:21:56 | M50699483 47000000 0xb931cec9f7648e5e | 2688K 0.25000 2.0276 20.27s | 2:05:13 92.70% |
| Mar 18 17:55:45 | M50699483 48000000 0x97af7a930201bf89 | 2688K 0.26563 2.0279 20.27s | 1:31:22 94.67% |
| Mar 18 18:29:33 | M50699483 49000000 0xf839104523ba25f8 | 2688K 0.25000 2.0273 20.27s | 57:31 96.64% |
| Mar 18 19:03:21 | M50699483 50000000 0xc46cbefb9833ef5f | 2688K 0.28125 2.0272 20.27s | 23:40 98.62% |
M( 50699483 )C, 0x534e6375ec291d92, offset = 25360009, n = 2688K, CUDALucas v2.06beta[/CODE]Final res64s from PrimeNet, [URL]https://www.mersenne.org/report_exponent/?exp_lo=50699483&exp_hi=&full=1[/URL]:[CODE]
status date user res64 shift reliability
[COLOR=Green]Verified 2011-02-16 Stephan Grupp 534E6375EC291D92 43987345 matched[/COLOR]
[COLOR=DarkRed]Bad 2020-03-17 Ernst W. Mayer AC42E090D0368236 4377830 mismatched interim residues, see above
[/COLOR][COLOR=green]Verified 2020-03-19 Kriesel 534E6375EC291D92 25360009 matched
[/COLOR][/CODE]

ewmayer 2020-03-19 20:24

I've verified that my DC run went off the rails between iterations 33.13M and 33.14M:

Original DC run:
[code][2020-03-03 21:06:14] M50699483 Iter# = 33130000 Res64: DBEC805AFE89F02C. AvgMaxErr = 0.071080518. MaxErr = 0.109375000. shift = 26304104.
M50699483 Roundoff warning on iteration 33139667, maxerr = 0.500000000000
Retrying iteration interval to see if roundoff error is reproducible.
Restarting M50699483 at iteration = 33130000. Res64: DBEC805AFE89F02C, residue shift count = 26304104
M50699483: using FFT length 2816K = 2883584 8-byte floats, initial residue shift count = 26304104
this gives an average 17.582107197154652 bits per digit
Retry of iteration interval with fatal roundoff error was successful.
[2020-03-03 21:23:43] M50699483 Iter# = 33140000 Res64: 3FE1873AE0CFDBAD. AvgMaxErr = 0.071077051. [b]MaxErr = 0.160156250[/b]. shift = 10073663.
[2020-03-03 21:32:33] M50699483 Iter# = 33150000 Res64: 68D413A52601DAB7. AvgMaxErr = 0.071095947. MaxErr = 0.101562500. shift = 12690212.[/code]
Re-run:
[code][2020-03-18 22:17:32] M50699483 Iter# = 33130000 Res64: DBEC805AFE89F02C. AvgMaxErr = 0.071080518. MaxErr = 0.109375000. shift = 26304104.
[2020-03-18 22:19:46] M50699483 Iter# = 33140000 Res64: 067C546DA1F13507. AvgMaxErr = 0.071065967. [b]MaxErr = 0.109375000[/b]. shift = 10073663.
[2020-03-18 22:22:00] M50699483 Iter# = 33150000 Res64: 1CD84913571585E1. AvgMaxErr = 0.071029297. MaxErr = 0.109375000. shift = 12690212.[/code]
So, a surmise as to what happened with the first run:

1. Some kind of residue data corruption at iteration 33139667 manifested itself as a sudden fatal maxerr = 0.5 (far larger than the ROE range for this p as established by the first part of the run);

2. On restart from the iter-33.13M savefile - note the Res64 matches that of the re-run from 33M - the 10Kiter interval completed successfully (no dangerous ROEs), but whatever glitchiness/system-needs-reboot-ness hosed the initial run of the interval was still happening; this time it corrupted the data in a way that evaded the ROE check. The max ROE was still anomalously high, though - see my comparative bolding of the 33.14Miter max ROE above: we get max ROE ~0.16, and it only reverts to the normal-for-this-run ~0.10 on the ensuing iteration interval. Trouble is, that is likely an unreliable guide - one more reason to use PRP/Gerbicz, especially on flaky hardware. (A toy illustration of what this error measure means follows.)
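
As an illustration of what that maxerr measures, a toy numpy sketch (an assumption-laden simplification: a plain complex-FFT squaring of base-2[SUP]16[/SUP] digits, not the irrational-base DWT mod 2[SUP]p[/SUP]-1 the real clients use). The "error" is how far the raw convolution outputs land from exact integers before rounding; a value creeping toward 0.5 means a coefficient was nearly ambiguous between two integers, i.e. the rounded result may already be garbage:
[CODE]import numpy as np

def fft_square_maxerr(digits, B=1 << 16):
    """Square an integer held as little-endian base-B digits; report max ROE."""
    n = len(digits)
    f = np.fft.rfft(np.asarray(digits, dtype=float), 2 * n)   # zero-pad to 2n
    raw = np.fft.irfft(f * f, 2 * n)          # acyclic convolution = squaring
    maxerr = float(np.max(np.abs(raw - np.round(raw))))
    coeffs, carry = [int(round(v)) for v in raw], 0
    for i, v in enumerate(coeffs):            # propagate carries back to base B
        v += carry
        coeffs[i], carry = v % B, v // B
    return coeffs, carry, maxerr

_, _, err = fft_square_maxerr([12345, 54321, 7, 1])
print(f"maxerr = {err:.2e}")   # healthy runs sit far below the danger zone
[/CODE]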

More thoughts on how GIMPS might better handle such erroneous runs - we know the server saving actual interim savefiles is not an option. But there are low-bandwidth ways to store interim reference data for DCers to use to see if their runs are on track. The every-1M-iter cross-comparison is the model here:

1. All GIMPS clients agree on the "iteration 0" convention, and that every 1M iterations (at least) codes will report an interim unshifted Res64;

2. When sending first-run results to the server, clients now also send said list of Res64s - that amounts to < 2kB of data for current wavefront runs;

3. DCers obtain a copy of said data when they are assigned the exponent. Client programs would use those much like they currently use the Gerbicz check for PRP runs; a client-side sketch follows.
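
A minimal client-side sketch of step 3, assuming (hypothetically - no such PrimeNet facility exists today) that the server hands back plain (iteration, unshifted Res64) pairs; the sample values are from the CUDALucas log earlier in this thread:
[CODE]# Hypothetical DC-client check against first-test interim residues.
reference = {
    1_000_000: "FF270405EA0D7239",
    2_000_000: "10BD8630CDC36E73",
    # ... < 2 kB of such pairs covers an entire wavefront-exponent run
}

def check_interim(iteration, my_unshifted_res64):
    """Raise as soon as this DC run diverges from the first test."""
    expected = reference.get(iteration)
    if expected is not None and my_unshifted_res64.upper() != expected:
        raise RuntimeError(f"DC diverged from first test at iteration "
                           f"{iteration}: {my_unshifted_res64} vs {expected}")
[/CODE]
That would catch a diverging DC mid-run rather than at the very end, at negligible bandwidth cost.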

LaurV 2020-03-30 04:08

[QUOTE=Runtime Error;541247]My question: is this working as intended?[/QUOTE]
[QUOTE=phillipsjk;541257]Sounds like a bug. This may be vulnerable to a replay attack, since you will have access to the full residues.[/QUOTE]
It works as intended.

You are ok; right now I assume no action will be taken, as long as you don't continue to do that. Sometimes we need triple and quadruple checks for "suspect" results (and there are ways to identify such, beyond what you see on the web), and it is only normal that people who do TC/QC tests get their credits.

On the other hand, if somebody tries to "inflate his credits" by repeatedly doing tricks like that, he will very quickly be spotted by the wolves lurking around here (I mean human wolves, not bots :razz:) who have nothing to do all day but watch what other users do (this is said with no disrespect!).

In general, advancing fast in the tops is immediately spotted by somebody, and the fast runner will be dissected not only with the scalpel, but mostly with a handsaw too, hehe. We are kind of a "tough community" here. In the good sense, of course. In the past, when such profiteers were found, George used to adjust their credits into the negatives, so whoever tried to take advantage of the system would have to work for some weeks to reach zero and start fresh again. So, beware :smile:

Runtime Error 2020-03-30 16:36

[QUOTE=LaurV;541304]It works as intended. [/QUOTE]

Good to know, thanks!

kriesel 2020-03-31 15:29

[QUOTE=ATH;535506]PRP=EDDC25414116177C4F046D79BE11A463,1,2,96365519,-1,76,0,3,1

Added the 2 extra arguments that can be in the assignment: ",3,1"

[B][U]1,2,96365519,-1[/U][/B]: Number to test: 1 * 2[SUP]96365519[/SUP] - 1
[B][U]76[/U][/B]: Trial factored to 2[SUP]76[/SUP]
[B][U]0[/U][/B]: Not sure about this one. (Maybe whether P-1 has been done or not? Or how many PRP tests have already been done on the exponent?)
[B][U]3[/U][/B]: PRP base 3. This is always 3 as standard for normal GIMPS candidates.
[B][U]1[/U][/B]: PRP type 1. This can vary from 1 to 5, but is mostly 1, or 4 for older gpuowl tests. Prime95, newer gpuowl versions, and Mlucas(?) default to type 1 (and Prime95 uses type 5 for PRP-CF tests on exponents with known factor(s)).

Both the PRP base and the PRP type have to be the same for the PRP DC test as for the original PRP test.


PRP type from undoc.txt; the "(default is 5)" applies only to PRP-CF tests. The type number is 1 on normal PRP tests.

[CODE]PRP supports 5 types of residues for compatibility with other PRP programs. If
a is the PRP base and N is the number being tested, then the residue types are:
1 = 64-bit residue of a^(N-1), a traditional Fermat PRP test used by most other programs
2 = 64-bit residue of a^((N-1)/2)
3 = 64-bit residue of a^(N+1), only available if b=2
4 = 64-bit residue of a^((N+1)/2), only available if b=2
5 = 64-bit residue of a^(N*known_factors-1), same as type 1 if there are no known factors
To control which residue type is generated, use this setting in prime.txt:
PRPResidueType=n (default is 5)
The residue type can also be set for PRP tests in worktodo.txt entries making
this option somewhat obsolete.[/CODE][/QUOTE]
Also, for bases > 3, some versions of gpuowl used PRP res type 0.
Gpuowl's supported PRP res type was 1 for some versions, 4 for others, and is 1 currently.

Worktodo formats for all common applications are described in [URL]https://www.mersenneforum.org/showpost.php?p=522098&postcount=22[/URL]
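
For illustration, a small parser for such PRP= lines (a sketch against the field breakdown quoted above; per prime95's documented worktodo format the "0" slot is tests_saved, which is likely the field ATH was unsure about - treat that reading as an assumption):
[CODE]def parse_prp(line):
    """Sketch: PRP=[AID,]k,b,n,c[,how_far_factored,tests_saved[,base,res_type]]"""
    assert line.startswith("PRP=")
    f = line[len("PRP="):].split(",")
    aid = f.pop(0) if len(f[0]) == 32 else None    # 32-hex-char assignment ID
    entry = dict(aid=aid, k=int(f[0]), b=int(f[1]), n=int(f[2]), c=int(f[3]))
    if len(f) > 4:
        entry["tf_bits"] = int(f[4])               # e.g. 76 = TFed to 2^76
        entry["tests_saved"] = int(f[5])           # assumed reading, see above
    entry["base"] = int(f[6]) if len(f) > 6 else 3      # PRP base, normally 3
    entry["res_type"] = int(f[7]) if len(f) > 7 else 1  # normally 1 (5 for PRP-CF)
    return entry

print(parse_prp("PRP=EDDC25414116177C4F046D79BE11A463,1,2,96365519,-1,76,0,3,1"))
[/CODE]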

Uncwilly 2020-03-31 15:42

[QUOTE=kriesel;541404]Also, for bases > 3, some versions of gpuowl used PRP res type 0.
Gpuowl's supported PRP res type was 1 for some versions, 4 for others, and is 1 currently.

Worktodo formats for all common applications are described in [URL]https://www.mersenneforum.org/showpost.php?p=522098&postcount=22[/URL][/QUOTE]
This is not a commentary thread. If you want to add info to the quoted post, send me a PM with the specific changes or a new version of that post. Your post that I have quoted will be moved or wished away into the cornfield.

kriesel 2020-03-31 17:40

LL triple check needed
 
Doublecheck=126745771,80,1
Prime95 or mprime with the Jacobi check and a random offset is recommended for the triple check.

[URL]https://www.mersenne.org/report_exponent/?exp_lo=126745771&full=1[/URL]

The second run was on my generally very reliable GTX 1080. Selected interim residues for checking against the triple-check run follow. The log file contains such lines at 50,000-iteration intervals. Offset for the run: 63381765.
[CODE]| Date Time | Test Num Iter Residue | FFT Error ms/It Time | ETA Done |
| Mar 19 20:38:52 | M126745771 1000000 0x15edf96ca917718e | 6912K 0.25000 6.5481 327.40s | 9:12:49:49 0.78% |
| Mar 20 03:56:31 | M126745771 5000000 0xb051d166bd0d4b77 | 6912K 0.26367 6.5451 327.25s | 9:05:54:55 3.94% |
| Mar 20 13:03:25 | M126745771 10000000 0xc78694fe47a6c290 | 6912K 0.26563 6.6095 330.47s | 8:20:48:43 7.88% |
| Mar 21 07:19:05 | M126745771 20000000 0xe8b99fde161b0dc9 | 6912K 0.25627 6.5234 326.17s | 8:02:45:13 15.77% |
| Mar 22 01:36:06 | M126745771 30000000 0x8086b6a513e02681 | 6912K 0.26563 6.5931 329.65s | 7:08:36:49 23.66% |
| Mar 22 19:54:40 | M126745771 40000000 0xdb678d3e24b74ed8 | 6912K 0.25000 6.5796 328.98s | 6:14:28:29 31.55% |
| Mar 23 14:14:22 | M126745771 50000000 0x0487c8c933f3b472 | 6912K 0.26563 6.6038 330.19s | 5:20:17:48 39.44% |
| Mar 24 08:31:55 | M126745771 60000000 0x602d363d22b19738 | 6912K 0.26563 6.5840 329.20s | 5:02:01:43 47.33% |
| Mar 25 12:33:21 | M126745771 70000000 0xeb8cda315394d739 | 6912K 0.28125 6.5855 329.27s | 4:07:43:28 55.22% |
| Mar 26 06:46:58 | M126745771 80000000 0x15755ff86e6e458a | 6912K 0.26563 6.5379 326.89s | 3:13:24:54 63.11% |
| Mar 27 23:45:57 | M126745771 90000000 0x0754d138fd03932a | 6912K 0.25000 6.5502 327.51s | 2:19:07:16 71.00% |
| Mar 28 17:57:45 | M126745771 100000000 0x9adb1773be482891 | 6912K 0.25781 6.5468 327.34s | 2:00:50:10 78.89% |
| Mar 29 12:07:57 | M126745771 110000000 0xbcb72ec2eb415862 | 6912K 0.26563 6.5203 326.01s | 1:06:33:46 86.78% |
| Mar 30 06:20:06 | M126745771 120000000 0x78dfe475a7ee035a | 6912K 0.26563 6.5534 327.67s | 12:18:32 94.67% |
| Mar 30 15:25:57 | M126745771 125000000 0x128cef69ca9d362a | 6912K 0.26172 6.5461 327.30s | 3:11:06 98.62% |
| Mar 30 17:15:07 | M126745771 126000000 0x88825c2169f7f431 | 6912K 0.26563 6.5485 327.42s | 1:21:38 99.41% |
M( 126745771 )C, 0x86e0ae7fee07db__, 6912K, CUDALucas v2.06beta, estimated total time = 231:14:17[/CODE]Max error logged: 0.29688.
No separate error messages appear in the log file for this primality test.

kriesel 2020-03-31 17:42

[QUOTE=Uncwilly;541405]This is not a commentary thread. If you want to add info to the quoted post, send me a pm with the specific changes or an new version of that post. Your post that I have quoted will be moved or wished away into the cornfield.[/QUOTE]Hmm. See also posts 6 thru 8 by others.

Uncwilly 2020-04-01 00:23

[QUOTE=kriesel;541413]Hmm. See also posts 6 thru 8 by others.[/QUOTE]
7-8 are in context to a current issue and will be moved after a sufficient time. If you take a gander at the processed thread, you will see many such things over there. It is closed, so all of those posts were made here first.

6 is important for those that may claim PRP-DC's. I have thought about extracting content and moving it to post 4.

Runtime Error 2020-04-01 00:54

Hi, I can take this one. Running it with manual testing in mprime on Linux, but I've registered it with Prime95 on Windows.

[QUOTE]Doublecheck=126745771,80,1[/QUOTE]

Edit: it says "mPrime95 please"... I thought it was either mprime on Linux or Prime95 on Windows. Let me know if I should stop the test. Should be done in ~4 days.

Uncwilly 2020-04-01 03:12

[QUOTE=Runtime Error;541443]Edit: it says "mPrime95 please"... I thought it was either mprime on Linux or Prime95 on Windows. Let me know if I should stop the test. Should be done in ~4 days.[/QUOTE]Just my either/or shorthand. You're good. Note Ken's list above. You can see how your run is matching.

Runtime Error 2020-04-02 16:06

[QUOTE=Uncwilly;541449]Note Ken's list above. You can see how your run is matching.[/QUOTE]

Noob question: how do I see these residues? I'm running headless. Is there an option in mprime to write interim residues to the results.txt or results.json.txt file?

Follow-up: when manual testing, can we submit interim residues to the server on the manual result turn-in page somehow?

Thanks!

ATH 2020-04-02 16:48

Use this in prime.txt to display residues in results.txt every 1M iterations, or however often you want:

InterimResidues=1000000

Runtime Error 2020-04-03 02:03

Thank you, ATH. Residues @ 40M are:

[QUOTE]M126745771 interim LL residue 5FAC61D44C636740 at iteration 40000000
M126745771 interim LL residue F41348BCAF00B2D7 at iteration 40000001
M126745771 interim LL residue DB678D3E24B74ED8 at iteration 40000002
[/QUOTE]

Which unfortunately does not match Kriesel's, unless it needs to be adjusted by offset somehow?

[QUOTE]M126745771 40000000 0xdb678d3e24b74ed8[/QUOTE]

I also reran the first million iters for this:

[QUOTE]M126745771 interim LL residue C92BE3EE0729DDBC at iteration 1000000
M126745771 interim LL residue 70230187AD96E7C9 at iteration 1000001
M126745771 interim LL residue 15EDF96CA917718E at iteration 1000002
[/QUOTE]

Which does not match Kriesel's either:

[QUOTE]M126745771 1000000 0x15edf96ca917718e[/QUOTE]

Prime95 2020-04-03 02:34

[QUOTE=Runtime Error;541650]
Which unfortunately does not match Kriesel's, unless it needs to be adjusted by offset somehow?[/QUOTE]

You match. Prime95 is off by 2 iterations -- my bad.

Runtime Error 2020-04-03 02:37

[QUOTE=Prime95;541652]You match. Prime95 is off by 2 iterations -- my bad.[/QUOTE]

Oh I see now, thank you!

kriesel 2020-04-03 15:17

[QUOTE=Prime95;541652]You match. Prime95 is off by 2 iterations -- my bad.[/QUOTE]
Perhaps for v30, mprime and prime95 could be changed to compatible iteration numbering, and to single-iteration output rather than 3 successive residues, to reduce future confusion when comparing to the various other programs.
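
In the meantime, the normalization is easy enough to do on the user side. A sketch (the regexes are fitted only to the sample log lines quoted in this thread; real logs vary):
[CODE]import re

P95  = re.compile(r"M(\d+) interim LL residue ([0-9A-F]{16}) at iteration (\d+)")
CUDA = re.compile(r"M(\d+)\s+(\d+)\s+0x([0-9a-f]{16})")

def normalized(line):
    """(exponent, iteration in the 0-based convention, RES64), or None."""
    m = P95.search(line)
    if m:   # prime95 labels run 2 high: its iteration x+2 = Mlucas/CUDALucas x
        return int(m.group(1)), int(m.group(3)) - 2, m.group(2)
    m = CUDA.search(line)
    if m:
        return int(m.group(1)), int(m.group(2)), m.group(3).upper()
    return None

a = normalized("M126745771 interim LL residue DB678D3E24B74ED8 at iteration 40000002")
b = normalized("| Mar 22 19:54:40 | M126745771 40000000 0xdb678d3e24b74ed8 | ...")
assert a == b   # the two runs above agree at (0-based) iteration 40000000
[/CODE]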

kriesel 2020-04-04 17:14

LL DC mismatch, LL TC requested:
Doublecheck=[M]53459173[/M],74,1

Selected CUDALucas v2.06 interim residues, from my Quadro 4000 run (others available at 20K granularity on request)[CODE]
| Mar 20 19:51:09 | M53459173 1000000 0xc30ddf3b1d43b694 | 2880K 0.23438 22.8849 457.69s | 13:21:32:24 1.87% |
| Mar 21 21:17:16 | M53459173 5000000 0xd7003fede7eb60e4 | 2880K 0.21820 22.8842 457.68s | 12:20:06:49 9.35% |
| Mar 23 05:04:44 | M53459173 10000000 0xf929defc4616878f | 2880K 0.25000 22.8848 457.69s | 11:12:19:21 18.70% |
| Mar 25 20:39:39 | M53459173 20000000 0x4368da2c97ee6947 | 2880K 0.23438 22.8851 457.70s | 8:20:44:26 37.41% |
| Mar 28 12:14:44 | M53459173 30000000 0xdd3697380fa32392 | 2880K 0.22656 22.8840 457.68s | 6:05:09:30 56.11% |
| Mar 31 03:49:39 | M53459173 40000000 0x38b622ef9b3de51b | 2880K 0.23438 22.8841 457.68s | 3:13:34:34 74.82% |
| Apr 02 19:24:34 | M53459173 50000000 0xa7c41bca785d290e | 2880K 0.22656 22.8854 457.70s | 21:59:38 93.52% |
| Apr 03 14:29:03 | M53459173 53000000 0x73da601da8899ce7 | 2880K 0.23047 22.8857 457.71s | 2:55:10 99.14% |
[/CODE]

Runtime Error 2020-04-04 17:26

[QUOTE=kriesel;541775]LL DC mismatch, LL TC requested:
Doublecheck=[M]53459173[/M],74,1[/QUOTE]

Tried to take this one (for manual testing) and register it with Prime95 first, but it says "ra: exponent 53459173 violates assignment rules". Probably because I tried to register in a new instance and it is Cat1 or lower? Frown. Someone else will get it!

Also, Kriesel, we match with EB8CDA315394D739 @70M on that other one. Unfortunately it is on a slower machine, so it'll be a couple more days, but so far your test seems good.

Follow-up question: I'm now outputting residues on a 100M-digit exponent that I'm PRPing via manual testing. Can PrimeNet currently take these interims from manual testing the same way it automatically takes every-10M-iteration residues from Prime95? (I tried submitting one via the txt upload page, but it says "Did not understand N lines.") Thanks!

kriesel 2020-04-04 19:37

[QUOTE=Runtime Error;541778]Also Kriesel, we match with EB8CDA315394D739 @70m on that other one. Unfortunately it is on a slower machine so it'll be a couple more days but so far your test seems good.

Follow up question: I'm now outputting residuals on a 100m digit exponent that I'm PRPing with manual testing. Can primenet currently take these interims from manual testing the same way it automatically takes every 10m iteration residual from Prime95? (I tried submitting it via the txt upload page but it says "Did not understand N lines.") Thanks![/QUOTE]The 70M match is encouraging. Thanks for the update.

There is no provision for manual submission of progress reporting or interim residues. I've asked that it be added to the mersenne.org web server's manual submission capability, but it has not happened. Doing it for even just one or two applications would be very helpful (CUDALucas or Gpuowl). [URL]https://www.mersenneforum.org/showpost.php?p=517213&postcount=15[/URL]

Runtime Error 2020-04-08 02:14

[QUOTE=Uncwilly;542065]How did you try to reserve them? And were they your first entries during the reserving process?[/QUOTE]

Thanks for the quick reply. Yes, they were the first, and I tried a subset of them a couple times. My process:

1. Download and unzip a fresh Prime95 instance for windows
2. Create a new worktodo.txt file, copy/paste the list of exponents from the Triple Check list, and check in with PrimeNet
3. Check my account on mersenne.org to see which got the assignment IDs
4. Usually only a few get registered, so I check the last successfully registered one and retry steps 1 thru 3 with the remaining exponents that I want
5. Copy/paste the successful reservations into my Manual Testing client to run

Uncwilly 2020-04-08 21:36

[QUOTE=Runtime Error;542067]Thanks for the quick reply. Yes, they were the first, and I tried a subset of them a couple times. My process:[/QUOTE]I just tried one. I stopped Prime95, tossed the line into my worktodo, restarted P95, and forced comms with dates; it was not given an AID. This is on a machine that is banging out matching DC Cat 0 and Cat 1 exponents all the time.

This is a job for George, Aaron, or James maybe.

Runtime Error 2020-04-09 18:15

[QUOTE=LaurV;542187]:shock: Now, that's odd, indeed ...[/QUOTE]

I'm newer to the forums, so no offense taken. My [unsubstantiated] conjecture is that it might have something to do with how expiring exponents were extended due to COVID-19. The only difference I noticed between these troublesome ones and those that I successfully registered was that these have expired P-1's, albeit from 2017. Thanks!

kriesel 2020-04-09 21:36

[QUOTE=LaurV;542179]Unrelated.

He specifically said what program he uses, and getting or not getting an AID is [U]server[/U] related. Entries [U]with[/U] N/A should [U]not[/U] get an AID - that was why N/A was introduced. But many (new) don't remember the ancient history.[/QUOTE]What form a given program requires or allows is related to what a user ought to input to it, if doing the input themselves, as Uncwilly did. And a reference for that is a good thing for people to be aware of, including new arrivals.

Uncwilly 2020-04-10 17:04

[QUOTE=ewmayer;542226]This PRP-3 ran using Mlucas on my Haswell quad and threw one Gerbicz-check error followed by a successful 1M-iter interval retry along the way; might be nice to do an early DC by way of data gathering re. the reliability of such runs:[/QUOTE]
[CODE][Apr 10 09:36] Gerbicz error check passed at iteration 8000000.
[Apr 10 09:36] M96364649 interim PRP residue D9CAD187355A206C at iteration 8000000
[/CODE]

LaurV 2020-04-17 01:42

[QUOTE=ric;542849]A small quad-check, if anyone's interested -> [URL="https://www.mersenne.org/M50978381"]50978381[/URL][/QUOTE]
[STRIKE]Me! (just testing gpuOwl). Will be done before the weekend.[/STRIKE]
Edit: Grrr.. sorry, already reserved by [SPOILER]a hardhearted rotten evil nameless terror[/SPOILER], 1% progress. I'll stay around the corner with the gun out, in case he also mismatches and a fifth is needed (low chance).
Edit 2: you [URL="https://www.mersenneforum.org/showpost.php?p=542854&postcount=11"]also posted this[/URL] in the "strategic DC" thread! Is there something I don't know? (Like we decided to post requests here, and reservations there, to avoid mixing them?)

Uncwilly 2020-04-17 02:00

[QUOTE=LaurV;542926]Edit 2: you [URL="https://www.mersenneforum.org/showpost.php?p=542854&postcount=11"]also posted this[/URL] in the "strategic DC" thread! Is there something I don't know? (Like we decided to post requests here, and reservations there, to avoid mixing them?)[/QUOTE]Requests and claims go in that thread. Once the request has been claimed, it gets moved here. Threads with claims also get moved here at some point. Except for the first few posts in that thread, all posts get moved here. This is where the -processed- posts go. The claim was made after the AID was handed out by PrimeNet. Therefore the rightful owner of the exponent is currently working on it.

If you want, an arm wrestling match on a table filled with shards of broken glass and rusty razor blades will be held in Retina's evil lair on the 17th of April at 0100 UTC.

LaurV 2020-04-17 03:39

[QUOTE=Uncwilly;542927]If you want, an arm wrestling match on a table filled [/QUOTE]
Scorpions? If not, I am not interested.

Uncwilly 2020-04-27 20:23

[QUOTE=ewmayer;543910][code]M89575217 Iter# = 24000000 Res64: F1A41B4449CE4B5B. Shift = 68304064
M89575217 Iter# = 25000000 Res64: 3A58B1D81EFD9D48. Shift = 72661316

M89575217 Iter# = 31000000 Res64: C8B0DA773668D354. Shift = 10783625
M89575217 Iter# = 34000000 Res64: BF62849ECCCBA9AC. Shift = 64655061
M89575217 Iter# = 48000000 Res64: 5FDEC01CEF0EA71B. Shift = 40545779[/code][/QUOTE]
Since laying hands on the machine in question, those have matched the X+2 iterations from P95.
Next update in ~40 hours.

ewmayer 2020-04-27 20:31

[QUOTE=Runtime Error;543982]Oh, you are right. Anyway, we mismatch between 71M and 72M.[/QUOTE]

Thanks - my run had data-borkage glitches (sudden ROE near 0.5, triggering retry of the 10000-iter interval) at iters 71070135 and 71652726; in both cases the rerun of the 10000-iter interval in question was successful, in the sense that no fatal ROEs were encountered. Such glitches are par for the course for my Haswell CPU; the above run had a total of 104(!) of them. The problem, as is now clear, is that each such occurrence carries a ~1%-level chance that the interval retry also suffers data corruption, just not of a kind which happens to show up via a sudden-onset fatal ROE. That's the kind of 'silent data corruption' the PRP/Gerbicz-check combo solves. As I've noted, now that [a] I'm aware of how prone this CPU is to such errors, and [b] have switched it to PRP-only as a result, I'm finding said CPU quite valuable as a means of testing the foolproofness of PRP/Gerbicz-check run mode. I still see a similar level of within-run bad-data glitches, but now the ones of the kind which proved fatal to the above LL run get caught at the next Gerbicz check.

@Uncwilly: my LL test of 89575217 on the same system had 65 of the above-described glitches, so enjoy the every-M-iter matches while they last, because I don't think they're gonna continue through the end of the run. :)
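
For reference, the identity behind the Gerbicz check being praised here, in a toy sketch (tiny exponent, plain modular arithmetic; real clients do this on FFT residues, handle the final partial block, and amortize the verification cost):
[CODE]# Gerbicz error check for PRP-3 squarings mod N = 2^p - 1 (toy sketch).
# With x_i = 3^(2^i) mod N and d_t the running product of block-boundary
# residues, the invariant d_(t+1) == d_t^(2^L) * 3 (mod N) must hold; silent
# corruption of any squaring inside a block breaks it almost surely.

def prp_with_gerbicz(p, L=100):
    N = (1 << p) - 1
    x = 3 % N                 # x_0
    d = x                     # d_0 = x_0
    done = 0
    while done + L <= p:      # tail block omitted for brevity
        for _ in range(L):
            x = x * x % N     # one PRP squaring
        done += L
        d_next = d * x % N    # fold in the new block-boundary residue
        chk = d
        for _ in range(L):        # verification costs L extra squarings; real
            chk = chk * chk % N   # clients check rarely, amortizing the cost
        if chk * 3 % N != d_next:
            raise RuntimeError(f"Gerbicz check failed at iteration {done}; "
                               "roll back to the last good block and rerun")
        d = d_next
    return x                  # 3^(2^done) mod N

prp_with_gerbicz(4423)        # runs silently when the arithmetic is clean
[/CODE]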

Uncwilly 2020-04-27 20:41

[STRIKE]Alex,[/STRIKE] oops [COLOR="DarkRed"][FONT="Arial Black"]Andreas
[/FONT][/COLOR]
Can you data mine some more TC's and early DC's? Or just let us know that there are no more.

ewmayer 2020-04-28 19:15

[QUOTE=ATH;544076]Your result is correct: [url]https://mersenne.org/M90110269[/url][/QUOTE]

Thanks, Andreas - now I really wish I'd kept a list of LL-tests done on the Haswell system, but all those data got blown away when I clean-installed Ubuntu 19.10 a few months ago. Ah, well - they'll get DCed in due course.

ATH 2020-04-28 20:58

[QUOTE=Uncwilly;544001][STRIKE]Alex,[/STRIKE] oops [COLOR="DarkRed"][FONT="Arial Black"]Andreas
[/FONT][/COLOR]
Can you data mine some more TC's and early DC's? Or just let us know that there are no more.[/QUOTE]

Me? I'm not the one usually doing this. I have a list with lots of those exponents with 1 Suspect and 1 Unverified LL test, if you want those. I can probably find those with 2+ Unverified (but not right now; headed to bed, maybe tomorrow).
Early DC's are harder; those are easier for Aaron to find with his SQL searches directly in the database and his list of users with faulty computers.

Edit: Added the list 84M-90M to the DC list. It is a 2-day-old search, so some might have been taken, but no time to check now, sorry.

ATH 2020-04-28 21:12

[QUOTE=ewmayer;544106]Thanks, Andreas - now I really wish I'd kept a list of LL-tests done on the Haswell system, but all those data got blown away when I clean-installed Ubuntu 19.10 a few months ago. Ah, well - they'll get DCed in due course.[/QUOTE]

Did you use other computers at the same time as the Haswell? Otherwise you can find them, based on the date maybe, in this list of your unverified exponents:

[url]https://www.mersenne.org/report_ll/?exp_lo=2&exp_hi=150000000&exp_date=&end_date=&user_only=1&user_id=Ernst+W.+Mayer&txt=1&exdchk=1&dispdate=1&exbad=1&exfactor=1&B1=[/url]

ewmayer 2020-04-29 22:43

[QUOTE=ATH;544123]Did you use other computers at the same time as the Haswell? Otherwise you can find them in this list of your unverified exponents based on the date maybe?[/QUOTE]

Something like that did occur to me, but you saved me the work of digging out the list of results submitted in the last few years, thanks.

So here is the machines-used summary for that timespan:

- Prior to March 2019 [when my ARM build running on cellphones went online], the only significant contributions were from the Haswell and my Intel Broadwell NUC. The latter has never had any stability issues and still has logfiles stored for all GIMPS work it's ever done, so I've removed its first-time-test entries from the list you posted, leaving 35 entries. A few of those were likely cellphone runs on phones which have since died; again, those seem quite reliable, based on them having been first tasked with several rounds of DCs and assigned first-time tests only if those passed. But the number of the latter will be very small. So nearly all of the 35 will have been done on the Haswell. I annotate the first few, which oddly remain assigned despite multiyear last-updated times - that may be a server glitch. The rest are a mix of unverifieds and DC-started-but-abandoned. Please check current status before grabbing - there may be a few which are currently assigned, and not in the "for how many years??" sense:
[code]Exponent ewm submitted on ewm result shift current-DC-assignee/last-update
[COLOR="YellowGreen"][STRIKE][M]74974717[/M] 7/23/2015 3:00 B5D9219DDA5F1E__ 0 Anonymous, 2015-05-30 [/STRIKE][/COLOR]
[URL="https://mersenne.org/M74974961"]M[/URL][COLOR="YellowGreen"][STRIKE][M]74974961[/M] 7/23/2015 3:00 F659D72393B68D__ 0 Anonymous, 2015-05-30 [/STRIKE][/COLOR]
[URL="https://mersenne.org/M74974961"]M[/URL][COLOR="yellowgreen"][STRIKE][M]76095269[/M] 2/22/2016 8:44 46851FC5879C8A__ 0 bvoltair, 2016-05-06 [/STRIKE][/COLOR]
[URL="https://mersenne.org/M76095727"]M[/URL][COLOR="YellowGreen"][STRIKE][M]76095727[/M] 2/22/2016 8:44 A92EC4A2208AF4__ 0 ANONYMOUS, 2016-01-20 [/STRIKE][/COLOR]
[URL="https://mersenne.org/M80862581"]M[/URL][COLOR="yellowgreen"][STRIKE][M]80862581[/M] 12/16/2017 23:59 C50895D8698720__ 0 [/STRIKE][/COLOR]
[URL="https://mersenne.org/M80862739"]M[/URL][COLOR="DarkRed"][STRIKE][M]80862739[/M] 12/16/2017 23:59 4A74D6080CC9C0__ 0 Runtime Error 5/3/2020[/STRIKE][/COLOR]
[URL="https://mersenne.org/M81356593"]M[/URL][COLOR="yellowgreen"][STRIKE][M]81356593[/M] 1/11/2018 0:44 4CED8E610C5680__ 0[/STRIKE][/COLOR]
[URL="https://mersenne.org/M81356621"]M[/URL][COLOR="YellowGreen"][STRIKE][M]81356621[/M] 1/23/2018 23:56 C9EE8ABD52DBD1__ 0 [/STRIKE][/COLOR]
[URL="https://mersenne.org/M81489533"]M[/URL][COLOR="yellowgreen"][STRIKE][M]81489533[/M] 1/23/2018 23:56 53CAC649260C99__ 0 [/STRIKE][/COLOR]
[URL="https://mersenne.org/M81489643"]M[/URL][COLOR="yellowgreen"][STRIKE][M]81489643[/M] 2/19/2018 1:33 EE9AA266BAF2C5__ 0 Runtime Error 5/3/2020[/STRIKE][/COLOR]
[URL="https://mersenne.org/M82261939"]M[/URL][COLOR="YellowGreen"][STRIKE][M]82261939[/M] 2/25/2018 1:16 70421560D74445__ 0 [/STRIKE][/COLOR]
[URL="https://mersenne.org/M82261957"]M[/URL][COLOR="yellowgreen"][STRIKE][M]82261957[/M] 3/8/2018 6:53 399BE4206CB1B7__ 0 [/STRIKE][/COLOR]
[URL="https://mersenne.org/M82701683"]M[/URL][COLOR="yellowgreen"][STRIKE][M]82701683[/M] 3/26/2018 2:16 1D3151F372C59F__ 0 Runtime Error 5/3/2020[/STRIKE][/COLOR]
[URL="https://mersenne.org/M85836229"]M[/URL][COLOR="yellowgreen"][STRIKE][M]85836229[/M] 1/4/2019 4:28 E208B13089B37F__ 0 [/STRIKE][/COLOR]
[URL="https://mersenne.org/M85836271"]M[/URL][COLOR="yellowgreen"][STRIKE][M]85836271[/M] 1/4/2019 4:28 94648F6B800C30__ 0 [/STRIKE][/COLOR]
[URL="https://mersenne.org/M85836449"]M[/URL][COLOR="yellowgreen"][STRIKE][M]85836449[/M] 1/4/2019 4:28 945EF467CCFE2B__ 0 [/STRIKE][/COLOR]
[URL="https://mersenne.org/M85836847"]M[/URL][COLOR="yellowgreen"][STRIKE][M]85836847[/M] 1/4/2019 4:28 A3A6EF6EC9E6CA__ 0 [/STRIKE][/COLOR]
[URL="https://mersenne.org/M85836869"]M[/URL][COLOR="yellowgreen"][STRIKE][M]85836869[/M] 1/4/2019 4:28 0710B4E14DC4CF__ 0[/STRIKE][/COLOR]
[URL="https://mersenne.org/M85836871"]M[/URL][COLOR="yellowgreen"][STRIKE][M]85836871[/M] 1/4/2019 4:29 B70A1A8D056298__ 0[/STRIKE][/COLOR]
[URL="https://mersenne.org/M85836953"]M[/URL][COLOR="yellowgreen"][STRIKE][M]85836953[/M] 1/4/2019 4:29 544DCA37645C14__ 0[/STRIKE][/COLOR]
[URL="https://mersenne.org/M86313749"]M[/URL][COLOR="yellowgreen"][STRIKE][M]86313749[/M] 12/27/2018 6:58 C01302C6444441__ 0[/STRIKE][/COLOR]
[URL="https://mersenne.org/M86313833"]M[/URL][COLOR="YellowGreen"][STRIKE][M]86313833[/M] 1/7/2019 7:42 236AB1B7F27908__ 0[/STRIKE][/COLOR]
[URL="https://mersenne.org/M86660489"]M[/URL][COLOR="YellowGreen"][STRIKE][M]86660489[/M] 7/8/2019 3:06 60549BE75AB34B__ 0 [/STRIKE][/COLOR]
[URL="https://mersenne.org/M86660533"]M[/URL][COLOR="yellowgreen"][STRIKE][M]86660533[/M] 7/28/2019 5:07 5834D82AEB7A2E__ 4392243 Runtime Error 5/3/2020[/STRIKE][/COLOR]
[URL="https://mersenne.org/M86660569"]M[/URL][COLOR="yellowgreen"][STRIKE][M]86660569[/M] 8/30/2019 5:35 83BC88FF8E5542__ 45526406[/STRIKE][/COLOR]
[URL="https://mersenne.org/M86687009"]M[/URL][COLOR="yellowgreen"][STRIKE][M]86687009[/M] 4/2/2019 19:01 F758FD5F7F7DB3__ 9961107 [/STRIKE][/COLOR]
[COLOR="DarkRed"][STRIKE][M]86687099[/M] 3/10/2019 21:38 E4F9565BBA645C__ 58101078 [/STRIKE][/COLOR]
[URL="https://mersenne.org/M86695813"]M[/URL][COLOR="YellowGreen"][STRIKE][M]86695813[/M] 6/19/2019 5:59 840C9A84963608__ 17568972[/STRIKE][/COLOR]
[URL="https://mersenne.org/M86749163"]M[/URL][COLOR="yellowgreen"][STRIKE][M]86749163[/M] 8/15/2019 18:32 7CBDF51C9A915D__ 55111967 [/STRIKE][/COLOR]
[URL="https://mersenne.org/M89493361"]M[/URL][COLOR="yellowgreen"][STRIKE][M]89493361[/M] 1/20/2020 2:34 DE1386EF6213A5__ 48305317[/STRIKE][/COLOR]
[URL="https://mersenne.org/M89522501"]M[/URL][COLOR="YellowGreen"][STRIKE][M]89522501[/M] 12/23/2019 19:17 28FB263CDCAB24__ 13206906[/STRIKE][/COLOR]
[URL="https://mersenne.org/M89575217"]M[/URL][COLOR="DarkRed"][STRIKE][M]89575217[/M] 2/10/2020 20:56 873C9D0059FD91__ 6603453 Mismatching DC by Will Edgington on 2020-02-21[/STRIKE][/COLOR]
[URL="https://mersenne.org/M89575379"]M[/URL][COLOR="yellowgreen"][STRIKE][M]89575379[/M] 1/2/2020 20:18 E06B3C3EEADACD__ 30089138[/STRIKE][/COLOR]
[URL="https://mersenne.org/M89604341"]M[/URL][COLOR="DarkRed"][STRIKE][M]89604341[/M] 11/1/2019 23:08 FA91DCBF5B6477__ 26413812[/STRIKE][/COLOR]
[URL="https://mersenne.org/M89614489"]M[/URL][COLOR="YellowGreen"][STRIKE][M]89614489[/M] 12/26/2019 2:00 B80DC2EE52E628__ 31334545 Runtime Error 5/3/2020[/STRIKE][/COLOR]
[/code]
If you reserve one or more of the ones < 86660500 which were run without residue-shift, make sure you DC using a program that uses nonzero shift.
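
For anyone new to the shift mechanics: as I understand the convention the major clients use, the stored residue is the true one times 2[SUP]shift[/SUP] mod Mp, i.e. a p-bit circular rotation, so unshifting for Res64 comparison is a one-liner (toy sketch, small modulus):
[CODE]def unshift_res64(p, stored, shift):
    """Undo a residue shift: multiply by 2^(p-shift) mod Mp (right-rotation)."""
    Mp = (1 << p) - 1
    true = stored * pow(2, (p - shift) % p, Mp) % Mp
    return f"{true & 0xFFFFFFFFFFFFFFFF:016X}"

# round-trip demo on a small Mersenne modulus:
p, shift = 61, 10
Mp = (1 << p) - 1
true = 0x1234567890ABCDEF % Mp
stored = true * pow(2, shift, Mp) % Mp
assert unshift_res64(p, stored, shift) == f"{true:016X}"
[/CODE]
Distinct shifts are what make a matched DC so convincing: the same hardware-level error hits different bit positions in the two runs, so a coincidental false match is essentially impossible - which is exactly why two zero-shift runs prove less.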

Uncwilly 2020-05-01 13:43

[QUOTE=ewmayer;543910][code]
M89575217 Iter# = 50000000 Res64: 478612DCC5[COLOR="SeaGreen"]C94EF7[/COLOR]. Shift = 20101834
M89575217 Iter# = 51000000 Res64: 637F763A3D[COLOR="Red"]E41368[/COLOR]. Shift = 62697453[/code][/QUOTE]

[CODE][Apr 30 17:43] M89575217 interim LL residue D8074783E308ED1A at iteration 50000000
[Apr 30 17:43] M89575217 interim LL residue FB9B8980F2DCB4BB at iteration 50000001
[Apr 30 17:43] M89575217 interim LL residue 478612DCC5[COLOR="seagreen"]C94EF7[/COLOR] at iteration 50000002
[Apr 30 20:37] M89575217 interim LL residue 5BA4FA55C3E67DF3 at iteration 51000000
[Apr 30 20:37] M89575217 interim LL residue 3E3ADCDFAF860EAA at iteration 51000001
[Apr 30 20:37] M89575217 interim LL residue DDB2F72CDA[COLOR="red"]7139C7[/COLOR] at iteration 51000002[/CODE]

ewmayer 2020-05-01 19:01

@Uncwilly - OK, we diverge between 50M and 51M - do you have interim Res64s for more-granular subintervals than 1M?

My run shows a detected-via-fatal-ROE bad-data episode between 50.65M and 50.66M; here are the bracketing residues:

50.65M: 747A1521BD2EA0B3
50.66M: A176DBC167748D3D

Uncwilly 2020-05-01 19:17

[QUOTE=ewmayer;544393]@Uncwilly - OK, we diverge between 50M and 51M - do you have interim Res64s for more-granular subintervals than 1M?[/QUOTE]No.

ewmayer 2020-05-01 23:05

[QUOTE=Uncwilly;544396]No.[/QUOTE]

Mlucas writes persistent savefiles every 10M iters, so as the divergence occurred conveniently close to (but greater than) 50M, I just restarted my run from the 50M savefile and confirmed that the original run and the rerun-from-50M indeed agreed at 50.65M and had diverged at the next checkpoint, 50.66M. The rerun also suffered no fewer than 2 bad-data glitches, between 50.51M-50.52M and 50.64M-50.65M; it's just that neither proved fatal. God, my Haswell CPU is such a flaky beast - the ideal kind of target for PRP/Gerbicz-check tests. I see both of my current PRP runs ~96M (one currently @31M, the other @86M) have each suffered one G-check failure-and-interval-retry incident to date.

Runtime Error 2020-05-05 01:27

[QUOTE=ewmayer;544527]Oh, re. your "reserving exponents is hard" post-annotation[/QUOTE]

Oops, I didn't mean that as a critique of the current systems in place. I only meant it as a tongue-in-cheek reflection of my own apparent inability to actually use those systems. I apologize if I offended anyone. But I do like your idea!

Anyway, the lowest exponent that I'm running ([M]76095727[/M]) finished and we match! The rest should finish tomorrow.

I'm running these on mprime. They do a random shift, right? So we'll probably be ok. Thanks for listing the offsets.

Uncwilly 2020-05-05 03:45

[QUOTE=Runtime Error;544610]Anyway, the lowest exponent that I'm running ([M]76095727[/M]) finished and we match! The rest should finish tomorrow.

I'm running these on mprime. They do a random shift right?, so we'll probably be ok. Thanks for listing the offsets.[/QUOTE]PrimeNet shows your offset on that exponent.

Runtime Error 2020-05-05 04:47

[QUOTE=Uncwilly;544615]PrimeNet shows your offset on that exponent.[/QUOTE]

I've been reserving them with either a bogus Prime95 instance or simply "Manual Testing" but then running mprime without assignment IDs (e.g. the line "DoubleCheck=[M]76095727[/M],75,1"). Is that a mistake in the sense that there's a 1 in whatever chance that we could be running the same offset? Thank you.

Uncwilly 2020-05-05 13:37

[QUOTE=Runtime Error;544618]Is that a mistake in the sense that there's a 1 in whatever chance that we could be running the same offset? Thank you.[/QUOTE]Ernst's were largely offset 0, so no chance on those. The others: what is 5,000,000 x 5,000,000?

ewmayer 2020-05-05 22:06

[QUOTE=Runtime Error;544644]@ewmayer: we match on all but [M]80862739[/M]. Unfortunately, I did not have the instance configured to output interim residues. I took the ones turned in before and after it from your list.[/QUOTE]

Thanks - so perhaps the Haswell is not as bad as I'd feared, but we shall see as more expos on the list get early-DCed. So 80862739 needs a TC - I don't have my old logfile for that one anyway, so no worries re. interim Res64s.

ATH 2020-05-06 18:56

[QUOTE=kruoli;544744]I took those. I tried to register them on PrimeNet, but that did not work, even though none of them are reported as being currently assigned.[/QUOTE]

It is probably because they are Category 2 double checks:
[url]https://www.mersenne.org/thresholds/[/url]

I think Cat 0, 1 and 2 cannot be assigned with manual reservation, I'm not sure about Cat 3. You can add them to worktodo.txt in Prime95 or mprime and you will get them if that computer account meets the requirements for Cat 2.

Uncwilly 2020-05-06 19:20

[QUOTE=ATH;544748] You can add them to worktodo.txt in Prime95 or mprime and you will get them if that computer account meets the requirements for Cat 2.[/QUOTE]
Stop Prime95, add them to your worktodo. Restart Prime95, then do a manual communication with "Send completion dates" ticked.

kruoli 2020-05-06 19:31

[QUOTE=Uncwilly;544750]Stop Prime95, add them to your worktodo. Restart Prime95, then do a manual communication with "Send completion dates" ticked.[/QUOTE]

That's what I've tried (again after this post). It only says
[CODE]PrimeNet error 40: No assignment
ra: exponent 53697661 violates assignment rules.[/CODE]
for each of those exponents. After this, all exponents are [FONT="Courier New"]N/A[/FONT]. I also double-checked the mersenne.org website, and there are definitely no assignments.

Maybe it is what ATH says, regarding the categories.

S485122 2020-05-06 22:47

[QUOTE=ATH;544748]It is probably because they are Category 2 double checks:
[url]https://www.mersenne.org/thresholds/[/url]

I think Cat 0, 1 and 2 cannot be assigned with manual reservation, I'm not sure about Cat 3. You can add them to worktodo.txt in Prime95 or mprime and you will get them if that computer account meets the requirements for Cat 2.[/QUOTE]Category 2 exponents can be assigned for manual testing IF one is logged in and has indicated the wish to get the smallest exponents available. See [url=https://www.mersenne.org/thresholds/]the Assignment Rules page at https://www.mersenne.org/thresholds/[/url].

Jacob

