mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Data (https://www.mersenneforum.org/forumdisplay.php?f=21)
-   -   (Preying for) World Record P-1! (https://www.mersenneforum.org/showthread.php?t=27406)

lisanderke 2021-12-15 12:01

(Preying for) World Record P-1!
 
I'd like to organize a thread to find World Record Factors with P-1 on small exponents. I'll now refer to this project as WR P-1 (at the risk of it sounding too pretentious, perhaps :lol:) That would make this a sub-sub-project of GIMPS! Similar to Petrw1's <2k (sub twok, <twok...) project, although that name is much catchier than WR P-1 in my opinion :razz:

All of these small exponents will need to be P-1'ed to very high bounds. For now, it's best to only run stage 1 on these exponents (see the quoted post below). However, this is where I'm stuck. The maths behind P-1 escapes me, let alone how to convert the GMP-ECM work already done into equivalent P-1 bounds ("previously done").

[QUOTE=Prime95;595226]There will not be lots of easy to find factors. GMP-ECM has been run on these expos - likely to pretty big bounds.

That's not to say it wouldn't be a good project to redo P-1 on all small exponents.

Suggestion: Remember that prime95 is not multi-threaded for small exponents. So, set up a worktodo.txt with work for every core. Just run stage 1, go deep. You're accumulating stage 1 results for a stage 2 run at a later date.

There are more advances coming for stage 2 P-1 on small expos (better multi-threading). When that version is ready, switch prime95 to one worker using all cores. Feed it all those stage 1 results you've accumulated.

BTW, a catchy project name (like petrw1's 20M project) and coordinating thread might be a good idea![/QUOTE]

With all of that out of the way, I need help devising proper bounds for all of these ranges and exponents... As I've said before, the math escapes me, but hopefully I can learn from this project!


What I can do for now, however, is draft up some way of organizing this effort. Here goes nothing: I suggest we limit our effort to the 1M range for the foreseeable future. Subsequently, our current efforts should be focused on the 100k range. (Once this range has been completed, we take the next 100k range, and so on...)
Within this 100k range, I'd like it to be possible to 'reserve'/claim certain ranges, with the smallest range being 1k.


[B]I'd like to reserve the 5k range (all exponents from 5000 to 6000)[/B]
I'll attempt to take this range to B1=2 trillion (2,000,000,000,000), as suggested by Zhangrc.




For a bit of backstory: I realized that lots of very small exponents that have already been factored, but are not cofactor-PRP, (starting from exponent [M]4013[/M]) never had any P-1 done to date. This prompted me to find more of these exponents that never had any P-1 done previously. Upon my request, James Heinrich added a checkbox to [URL]https://www.mersenne.ca/morefactors.php[/URL] to find these exponents that have had no P-1 done before. Though it was later pointed out that these exponents have had extensive GMP-ECM done, and any P-1 would have to be to very high bounds to have a crack at finding more factors. Credit to Zhangrc for coming up with the name for this project!

[QUOTE=Zhangrc;595237]Maybe … The title should be "Fighting for World Record P-1." That is really going to find world record size factors.
Maybe something like this:
[URL="https://www.mersenne.ca/prob.php?exponent=9007&factorbits=95&b1=2000000000000&b2=1.0E%2B17"]https://www.mersenne.ca/prob.php?exponent=9007&factorbits=95&b1=2000000000000&b2=1.0E%2B17[/URL][/QUOTE]

lisanderke 2021-12-15 12:03

Post reserved for factors found within this project's active ranges.

lisanderke 2021-12-15 12:08

Taking George Woltman's advice, let's hold off on starting stage 2 on these exponents. For now, I'd like to open up discussion on what B1 would be sufficient to take these ranges to. Zhangrc suggested the following bound:

[url]https://www.mersenne.ca/prob.php?exponent=9007&factorbits=95&b1=2000000000000&b2=1.0E%2B17[/url]

Zhangrc 2021-12-15 14:13

[QUOTE=lisanderke;595273]Taking George Woltman's advice, let's hold off on starting stage 2 on these exponents. For now, I'd like to open up discussion on what B1 would be sufficient to take these ranges to. Zhangrc suggested the following bound:

[url]https://www.mersenne.ca/prob.php?exponent=9007&factorbits=95&b1=2000000000000&b2=1.0E%2B17[/url][/QUOTE]

For 5000 to 6000, you can use even larger bounds, say 3e12 to 4e12 (as long as you can bear the run time).
Then you assume no factor below 2^95 and let Prime95 automatically decide the B2 bound for you.
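(In worktodo.txt terms, per whatsnew.txt, leaving B2 as 0 and supplying the assumed trial-factoring depth lets Prime95 pick the best B2 itself; a sketch only, with the exponent chosen arbitrarily:)

[C]Pminus1=1,2,5003,-1,3000000000000,0,95[/C]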

Prime95 2021-12-15 14:32

I don't know if there is a "correct" B1 value. It may simply depend on how much patience you have.

FWIW, my aging quad core can compute 4 exponents around 80K to B1=300M in just under 2 hours. Extrapolating I should be able to take four 5K exponents to B1=1T in about 17 days. A pretty significant effort considering the number of exponents to work on.
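(A rough sketch of that extrapolation, under the assumption that stage 1 time scales roughly linearly with both B1 and the exponent's size:)

[CODE]# Rough stage 1 runtime extrapolation (assumes time ~ B1 * exponent size).
base_hours = 2.0                                   # four ~80K exponents to B1=300M
base_exp, base_b1 = 80_000, 300_000_000
target_exp, target_b1 = 5_000, 1_000_000_000_000   # four 5K exponents to B1=1T

hours = base_hours * (target_b1 / base_b1) * (target_exp / base_exp)
print(f"{hours:.0f} hours = {hours/24:.1f} days")  # ~417 hours = ~17.4 days
[/CODE]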

I'll reserve the 79000 to 82000 area for P-1. I've been doing B1=300M and annoyingly this range has not given me a new factor yet. I think I'll try B1=3 to 5 billion.

@James: Can the P-1 probability calculator be changed to allow more than 95 bits of TF? Or even better, estimate the proper TF value given the amount of ECM that's been done?

If anyone has old P-1 files to donate that can be used as a starting point please let us know. We could set up a repository for future P-1 efforts.

axn 2021-12-15 14:39

Try to keep the following in mind while doing large B1. Although 2e12 seems so large that it would barely make any difference, I guess.

[QUOTE=undoc.txt]If doing P-1 with a very large B1, the program uses about 47MB of memory to
pre-calculate the product of all small primes below 250 million. The primes above
250 million are processed using a somewhat slower method. You can elect to use
more memory to move the 250 million threshold higher. In prime.txt enter:
MaxStage0Prime=n (n is in millions, default is 250)[/QUOTE]

lisanderke 2021-12-15 16:15

First off: I'm not able to keep editing my posts (mainly the first and second one in this thread), but given the nature of this thread, I'd like to request the ability to do so. It might be much easier to keep track of reservations if it's all being done in one post. Should I contact someone specifically to request this permission if it's a possibility? If not, I could set up a google sheet to keep track of reservations.



I've spread out my worktodo with exponents on each core (6 cores) at B1=2T, and it seems this would take 28 days per exponent (with B1 only). There are a little more than 100 exponents with factors found, and only three exponents with no factors found, in the 5k range... B1=200B takes it down to three days and change, B1=20B takes it down to seventeen hours, and B1=2B becomes two hours.


I'll start out by taking all exponents in 5k to 2B, which for almost half of them will be the first P-1 done (though this is not an accurate indicator of further factoring success ;) ). Until 30.8 (or perhaps a later version) comes along with multi-threaded stage 2 for P-1 on small exponents, I'll hold off on doing stage 2 on these exponents. Instead, further increasing B1 with every round seems like the best option for now. I might be able to muster enough patience to bring B1 up with one day's worth of work per exponent.


@axn, Do you know how the memory usage increases with a higher prime threshold? I assume it doesn't scale linearly in the sense that 500 million would become 100 MB?


@Prime95 Hopefully 3B to 5B gives you some factors! As mentioned above, it might be a bit hard to keep track of reservations for a bit but I've noted you down for 79k, 80k and 81k (if I understood correctly)

axn 2021-12-15 16:59

[QUOTE=lisanderke;595288]@axn, Do you know how the memory usage increases with a higher prime threshold? I assume it doesn't scale linearly in the sense that 500 million would become 100 MB?[/QUOTE]
It should be very nearly linear growth. However, if you're planning on doing B1>1e12, it probably doesn't matter; >99% of stage 1 will be using the slower method. Also note that calculating that product itself will take some time, so even if you could (because you have the RAM), you shouldn't try to go too high with that parameter. Practically, you might set that if you're attempting, say, B1<1e10.

EDIT: Note that if you take a number to some B1 and later on increase it, it will use the slower method. Try to go directly to the largest B1 that you can comfortably do with the faster method (say, 1e10).
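(A rough sketch of why the growth is nearly linear: by the prime number theorem, the product of all primes below N has about N/ln 2 bits, so the memory needed grows almost linearly with the threshold:)

[CODE]import math

def stage0_product_mb(threshold):
    # The product of all primes below `threshold` has ~threshold/ln(2) bits,
    # since the sum of ln(p) over primes p < N is asymptotically ~N.
    bits = threshold / math.log(2)
    return bits / 8 / 1e6  # bytes -> megabytes

print(f"{stage0_product_mb(250e6):.0f} MB")  # ~45 MB, close to the ~47 MB in undoc.txt
print(f"{stage0_product_mb(500e6):.0f} MB")  # ~90 MB, i.e. near-linear growth
[/CODE]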

bur 2021-12-16 12:32

I really like that idea, to me finding new factors of small Mersenne numbers is more interesting than factors of very large Mersenne numbers.

Why is it considered better to just do B1 at first? Is there a deeper mathematical/efficiency reason for it, or is it just because new features for stage 2 are coming and we want to make use of them?

How exactly are stage 1 results fed to mprime? Do I just keep the files from a B1=B2 run and mprime will automatically notice when I do that exponent again with B2 > B1?

axn 2021-12-16 12:58

[QUOTE=bur;595365]Why is it considered better to just do B1 at first? Is there a deeper mathematical/efficiency reason for it, or is it just because new features for stage 2 are coming and we want to make use of them?[/quote]
The latter. George is still fine-tuning the software for extreme B2. There is one more reason: stage 2 runs well when you give it more RAM, so currently the software will only run one stage 2 at a time, but that stage 2 can be multi-threaded. Stage 1 on small exponents, however, can't be multi-threaded. So to make optimal use of all cores, you run "n" stage 1 jobs in parallel, and then do the stage 2 runs one by one using all cores.

[QUOTE=bur;595365]How exactly are stage 1 results fed to mprime? Do I just keep the files from a B1=B2 run and mprime will automatically notice when I do that exponent again with B2 > B1?[/QUOTE]

Yep.

bur 2021-12-16 13:50

[QUOTE]So to make optimal use of all cores, you run "n" stage 1 jobs in parallel, and then do the stage 2 runs one by one using all cores.[/QUOTE]that's good to keep in mind, thanks!

[QUOTE]Instead, further increasing B1 with every round seems like the best option for now.[/QUOTE]I'm not sure I got that right, but is the plan to run consecutive P-1 with increasing bounds on the same exponent? Isn't that a waste of the previous runs?

lisanderke 2021-12-16 14:30

Rookie question I'm afraid, but I'm having trouble limiting P-1 to stage 1 only. What is the most reliable way to limit Prime95 to only do stage 1?

kruoli 2021-12-16 14:37

You will have to exclude the "how far factored" part in the worktodo line and set e.g. B2=B1.
[C]Pminus1=N/A,1,2,29802679,-1,1750000,1750000[/C]
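(Following George's earlier advice of one stage 1 task per core, a worktodo.txt could then look something like the sketch below; the exponents and bounds are arbitrary examples:)

[CODE][Worker #1]
Pminus1=N/A,1,2,5003,-1,2000000000,2000000000
[Worker #2]
Pminus1=N/A,1,2,5009,-1,2000000000,2000000000
[Worker #3]
Pminus1=N/A,1,2,5011,-1,2000000000,2000000000
[/CODE]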

lisanderke 2021-12-16 14:40

Thank you! I had set B2=B1 before, but excluded how far factored, and that didn't seem to work. Then I tried B2=0 with how far factored left in, and that didn't work either.

kriesel 2021-12-16 14:50

[QUOTE=lisanderke;595375]Thank you! I had set B2=B1 before, but excluded how far factored, and that didn't seem to work. Then I tried B2=0 with how far factored left in, and that didn't work either.[/QUOTE]how far factored is legit for PFactor, and not allowed in PMinus1. Conversely bounds are legit in Pminus1, and not in PFactor. [url]https://www.mersenneforum.org/showpost.php?p=522098&postcount=22[/url]

lisanderke 2021-12-16 14:50

2 Attachment(s)
Huh. What I wrote before makes no sense, since that should have worked (what I claim to have tried first...). Can you tell I've been getting headaches from trying to get this to work? :grin:
Now I've run into another issue, but I believe this is a bug... Please do correct me if this is not a bug but (somehow) expected behavior in 30.8: my workers are stuck at 100% and keep reporting 100% done (but actually never finishing). See attached.
I don't want to stop the workers for fear of corrupting the save files...

kruoli 2021-12-16 14:58

[QUOTE=kriesel;595377]how far factored is legit for PFactor, and not allowed in PMinus1. Conversely bounds are legit in Pminus1, and not in PFactor. [url]https://www.mersenneforum.org/showpost.php?p=522098&postcount=22[/url][/QUOTE]

The first part is not correct. Whatsnew.txt states:
[CODE]ECM and P-1 can find the best B2 value for the amount of memory prime95 is allowed to use. For ECM,
this happens when the worktodo.txt line sets B2=100*B1 which is the default assignment from the PrimeNet
server. For P-1, the best B2 is chosen when the worktodo.txt line specifies the trial factoring depth.
For example, "Pminus1=1,2,20000003,-1,500000,0,70" chooses the best B2 bound for B1=500000 given that
M20000003 has been trial factored to 2^70.[/CODE]

James Heinrich 2021-12-16 15:12

[QUOTE=Prime95;595280]@James: Can the P-1 probability calculator be changed to allow more than 95 bits of TF? Or even better, estimate the proper TF value given the amount of ECM that's been done?[/QUOTE]I don't normally monitor new threads so I didn't see this till it was pointed out to me.
I have changed the max TF to 99 for now, if you want it bigger let me know.
I haven't got a clue how to estimate TF based on ECM?

kruoli 2021-12-16 15:21

[QUOTE=James Heinrich;595382]I haven't got a clue how to estimate TF based on ECM?[/QUOTE]

Take the estimated T-Level that is already on your site, then calculate [$](\text{T-level})/\log_{10}{2}[/$]. This is your value.
Example: M1277 has an estimated T-Level of 62.878. The analog TF bit level would be 208 or 209 (yes, very high!).
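(As a quick sketch in code, the conversion is a one-liner:)

[CODE]import math

def tlevel_to_tf_bits(t_level):
    # ECM T-level is in decimal digits; divide by log10(2) to get bits.
    return t_level / math.log10(2)

print(f"{tlevel_to_tf_bits(62.878):.1f}")  # ~208.9, i.e. 208 or 209 bits
[/CODE]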

kriesel 2021-12-16 15:43

[QUOTE=kruoli;595380]The first part is not correct. Whatsnew.txt states ...[/QUOTE]Thanks, I missed that change at v30.4/30.5, will extend the reference info to reflect it.


(edit: done)

Prime95 2021-12-16 16:55

[QUOTE=lisanderke;595378]
Now I've run into another issue, but I believe this is a bug...[/QUOTE]

Looks like a bug to me. Hopefully just a harmless reporting issue. Did the runs complete OK?

lisanderke 2021-12-16 17:15

3 Attachment(s)
[QUOTE=Prime95;595388]Looks like a bug to me. Hopefully just a harmless reporting issue. Did the runs complete OK?[/QUOTE]


They did not? They did?
At some point (after way too much time spent on the exponents, half an hour more than I expected them to take) some runs were "completed" and B1=x was reported to Primenet. Then I stopped the remaining exponents (while they had been at 100% for many iterations) and restarted Prime95. That prompted IMMEDIATE reporting of B1=x complete to Primenet. (Keep in mind, these runs were not "completed" in the same manner as the previous ones; these were "forced" to upload, because Prime95 probably recognized they were at 100%?)

Two different kinds of runs were uploaded to Primenet:
1. Assignments that reported a "correct" climb up to 100%. These were then stuck reporting 100% while otherwise seemingly reporting progress correctly (in the sense that every x iterations still took around 5 seconds). Eventually they completed and reported to Primenet.

2. Assignments that were stopped in the "middle" of reporting 100%. Prime95 was restarted, prompting immediate reporting of "results".


Now what amazes me the most is that, for some reason, assignments of the second kind were given MORE credit (the assignments that intuitively should not have "completed" whatever they were doing, also considering that not as much time was spent on those exponents).



I fear my explanation hardly makes any sense; unfortunately, I did not save screenshots of the behavior I've just described. What I do have is the "results" reported to Primenet. Maybe someone else can make sense of this. [M]5741[/M] and [M]5717[/M] were both reported after 'completing' a run that had the 100% reporting bug. The rest (with more credit given) were restarted in the middle of such a run, after the bug had taken effect (for some time).


Attached are the save files (with .txt appended to allow me to upload them). Below are the results.txt entries (for reference of how long these runs took; much longer than I expected them to run).

[C][Thu Dec 16 16:18:45 2021]
UID: x/30.8b5, M5717 completed P-1, B1=200000000, Wi4: 1794843D
UID: x/30.8b5, M5741 completed P-1, B1=200000000, Wi4: 17FA8444
[Thu Dec 16 16:22:49 2021]
UID: x/30.8b5, M5903 completed P-1, B1=200000000, Wi4: 1794843E
UID: x/30.8b5, M5861 completed P-1, B1=200000000, Wi4: 17908447
UID: x/30.8b5, M5879 completed P-1, B1=200000000, Wi4: 17FD8448
UID: x/30.8b5, M5851 completed P-1, B1=200000000, Wi4: 17EA843C
[/C]


The headaches keep getting worse and worse. This all happened on 30.8b5 FYI.

James Heinrich 2021-12-16 17:28

[QUOTE=Prime95;595280]Can the P-1 probability calculator be changed to allow more than 95 bits of TF?
Or even better, estimate the proper TF value given the amount of ECM that's been done?[/QUOTE][QUOTE=kruoli;595383]Take the estimated T-Level that is already on your site, then calculate T-level/log10(2). This is your value.[/QUOTE]The calculator will now accept either TF bitlevel [i]or[/i] ECM T-Level as input. If the latter, it will be converted to the equivalent TF bitlevel.

lisanderke 2021-12-16 18:17

[URL]https://www.mersenne.ca/exponent/5003[/URL]
[URL="https://www.mersenne.ca/prob.php?exponent=5003&ecmtlevel=33&b1=5000000000&b2=4.4190835712423E%2B15"]https://www.mersenne.ca/prob.php?exponent=5003&ecmtlevel=33.639&b1=5000000000&b2=4.4190835712423E%2B15[/URL]
Do these values make sense? This is an example of stage 2 completed on 30.8b5

firejuggler 2021-12-16 18:44

What do you consider a low exponent?
Because today, I found a rather big one.
[M]M8538269[/M] has a 129.728-bit (40-digit) factor: [url=https://www.mersenne.ca/M8538269]1127043861162808113814773315610463390639[/url] (P-1,B1=1000000,B2=330325710)

lisanderke 2021-12-16 18:50

[QUOTE=firejuggler;595400]What do you consider a low exponent?
Because today, I found a rather big one.
[M]M8538269[/M] has a 129.728-bit (40-digit) factor: [URL="https://www.mersenne.ca/M8538269"]1127043861162808113814773315610463390639[/URL] (P-1,B1=1000000,B2=330325710)[/QUOTE]
Congratulations! Anything below the cofactor-PRP first-time-check wavefront is a low exponent to me :smile:

Prime95 2021-12-16 18:53

[QUOTE=lisanderke;595393]
Now what amazes me the most is that, for some reason, assignments of the second kind were given MORE credit. [/QUOTE]

Coincidence. The explanation is the larger exponents used a larger FFT length and thus deserved more credit.

Here's some good news. You can check if your save files are good. Create a worktodo.txt entry with a slightly larger B1 and no known factors. Let 30.7b9 run that and see if it finds some or all of the known factors. I think you'll find you're in good shape.

Example: Pminus1=N/A,1,2,5003,-1,200000999,200000999

Prime95 2021-12-16 18:55

[QUOTE=firejuggler;595400]What do you consider a low exponent?
Because today, I found a rather big one.[/QUOTE]

I think of low as sub-100K. Maybe sub-1M.

Nice find, BTW! I'm a little jealous.

charybdis 2021-12-16 18:59

[QUOTE=lisanderke;595397][URL]https://www.mersenne.ca/exponent/5003[/URL]
[URL="https://www.mersenne.ca/prob.php?exponent=5003&ecmtlevel=33&b1=5000000000&b2=4.4190835712423E%2B15"]https://www.mersenne.ca/prob.php?exponent=5003&ecmtlevel=33.639&b1=5000000000&b2=4.4190835712423E%2B15[/URL]
Do these values make sense? This is an example of stage 2 completed on 30.8b5[/QUOTE]

Note that many if not all exponents in this range have likely had far more ECM than has been reported. For example, this number has a known 44-digit factor found by ECM, and it's very unlikely that the reported t33.6 would have found this. See [URL="https://www.mersenne.ca/exponent/5333"]M5333[/URL] for a more extreme example.

Looking at the sizes of the [URL="https://www.mersenne.ca/userfactors/ecm/1/rexponent"]known factors[/URL], I'd say all exponents up to ~7500 have probably had at least a t45 (maybe even t50) from Ryan Propper. Ryan doesn't seem to have done any work from 7700-10000 so if I were you I'd focus your efforts there.

lisanderke 2021-12-16 19:03

1 Attachment(s)
[QUOTE=Prime95;595403]Coincidence. The explanation is the larger exponents used a larger FFT length and thus deserved more credit.

Here's some good news. You can check if your save files are good. Create a worktodo.txt entry with a slightly larger B1 and no known factors. Let 30.7b9 run that and see if it finds some or all of the known factors. I think you'll find you're in good shape.

Example: Pminus1=N/A,1,2,5003,-1,200000999,200000999[/QUOTE]
Aha, that explanation is a relief.


I used the following worktodo entry as suggested: Pminus1=N/A,1,2,5903,-1,200000999,200000999
Copied save file for 5903 from 30.8b5 to 30.7b9. And Prime95 turned it into a 'bad' save file.

lisanderke 2021-12-16 19:11

[QUOTE=charybdis;595405]Note that many if not all exponents in this range have likely had far more ECM than has been reported. For example, this number has a known 44-digit factor found by ECM, and it's very unlikely that the reported t33.6 would have found this. See [URL="https://www.mersenne.ca/exponent/5333"]M5333[/URL] for a more extreme example.

Looking at the sizes of the [URL="https://www.mersenne.ca/userfactors/ecm/1/rexponent"]known factors[/URL], I'd say all exponents up to ~7500 have probably had at least a t45 (maybe even t50) from Ryan Propper. Ryan doesn't seem to have done any work from 7700-10000 so if I were you I'd focus your efforts there.[/QUOTE]


Thank you for the insights! Before I move on from the 5k range, though, let me try to wrap my head around this:
If we assume B1=4e12 (as suggested by Zhangrc; no clue what the runtime for B1 would be), B2=800K times 4e12, and T-level = 50, we get this calculation: [URL="https://www.mersenne.ca/prob.php?exponent=5003&ecmtlevel=50&b1=4000000000000&b2=3.2E%2B18"]https://www.mersenne.ca/prob.php?exponent=5003&ecmtlevel=50&b1=4000000000000&b2=3.2E%2B18[/URL]
That seems like a lot of effort for a very low chance of finding a factor.

charybdis 2021-12-16 19:34

[QUOTE=lisanderke;595408]Thank you for the insights! Before I move on from the 5k range, though, let me try to wrap my head around this:
If we assume B1=4e12 (as suggested by Zhangrc; no clue what the runtime for B1 would be), B2=800K times 4e12, and T-level = 50, we get this calculation: [URL="https://www.mersenne.ca/prob.php?exponent=5003&ecmtlevel=50&b1=4000000000000&b2=3.2E%2B18"]https://www.mersenne.ca/prob.php?exponent=5003&ecmtlevel=50&b1=4000000000000&b2=3.2E%2B18[/URL]
That seems like a lot of effort for a very low chance of finding a factor.[/QUOTE]

Indeed it is. When this much ECM has already been run, it is more efficient to run further ECM rather than P-1 if your aim is solely to find factors. Of course you can only find a record P-1 factor by running P-1 :wink: but I'd still advise you to focus on ranges that Ryan hasn't targeted or you may not find factors at all.

charybdis 2021-12-16 19:38

[URL="https://www.mersenne.ca/exponent/5231"]M5231[/URL] is a good example of a potential huge P-1 factor having already been found by Ryan using ECM.

Prime95 2021-12-16 20:06

[QUOTE=lisanderke;595406]

I used the following worktodo entry as suggested: Pminus1=N/A,1,2,5903,-1,200000999,200000999
Copied save file for 5903 from 30.8b5 to 30.7b9. And Prime95 turned it into a 'bad' save file.[/QUOTE]

My bad. 30.7 cannot read a 30.8 save file. Try your save file and "Pminus1=N/A,1,2,5903,-1,200000999,200000999" with 30.8.

lisanderke 2021-12-16 22:46

1 Attachment(s)
[QUOTE=Prime95;595416]My bad. 30.7 cannot read a 30.8 save file. Try your save file and "Pminus1=N/A,1,2,5903,-1,200000999,200000999" with 30.8.[/QUOTE]
Oh, that did finish the P-1, but it didn't report a factor. Could it be that the save file already knows the 8 factors?

lisanderke 2021-12-16 23:00

I just tried the same line with the other exponents in my previous list, and they all report 'complete' with no mention of any factors. Below is results.txt (both the runs with B1 ending in triple zero and those ending in triple 9, with no known factors). I have kept the save files of these exponents both from after the run with the 100%-complete reporting bug and from after the run with no known factors.


[C][Thu Dec 16 16:18:45 2021]
UID: lisander/30.8b5, M5717 completed P-1, B1=200000000, Wi4: 1794843D
UID: lisander/30.8b5, M5741 completed P-1, B1=200000000, Wi4: 17FA8444
[Thu Dec 16 16:22:49 2021]
UID: lisander/30.8b5, M5903 completed P-1, B1=200000000, Wi4: 1794843E
UID: lisander/30.8b5, M5861 completed P-1, B1=200000000, Wi4: 17908447
UID: lisander/30.8b5, M5879 completed P-1, B1=200000000, Wi4: 17FD8448
UID: lisander/30.8b5, M5851 completed P-1, B1=200000000, Wi4: 17EA843C
[Thu Dec 16 23:42:24 2021]
UID: lisander/30.8b5, M5903 completed P-1, B1=200000999, Wi4: 17948BF0
[Thu Dec 16 23:44:13 2021]
UID: lisander/30.8b5, M5903 completed P-1, B1=200000999, Wi4: 17948BF0
[Thu Dec 16 23:53:14 2021]
UID: lisander/30.8b5, M5741 completed P-1, B1=200000999, Wi4: 17FA8B8A
UID: lisander/30.8b5, M5851 completed P-1, B1=200000999, Wi4: 17EA8BF2
UID: lisander/30.8b5, M5861 completed P-1, B1=200000999, Wi4: 17908B89
UID: lisander/30.8b5, M5879 completed P-1, B1=200000999, Wi4: 17FD8B86
UID: lisander/30.8b5, M5903 completed P-1, B1=200000999, Wi4: 17948BF0
[Thu Dec 16 23:59:21 2021]
UID: lisander/30.8b5, M5717 completed P-1, B1=200000999, Wi4: 17948BF3[/C]

ATH 2021-12-16 23:32

[QUOTE=kruoli;595383]Take the estimated T-Level that is already on your site, then calculate [$](\text{T-level})/\log_{10}{2}[/$]. This is your value.
Example: M1277 has an estimated T-Level of 62.878. The analog TF bit level would be 208 or 209 (yes, very high!).[/QUOTE]

I took M1277 P-1 to B1=10[SUP]12[/SUP] and B2=2.3*10[SUP]17[/SUP] many years ago with GMP-ECM.

Other people might have gone higher.

ATH 2021-12-16 23:44

Here are some P-1 files if someone wants to take them to higher B1 at some point. I was just doing random tests of v30.8:
[URL="http://hoegge.dk/mersenne/P-1files.zip"]P-1files.zip[/URL]

I was using Pminus1BestB2=0 in prime.txt and choosing my own B1/B2 limits; that is why I did not change the "factored to 66 bits" field.

[CODE]
1567   B1=50e9  B2=1e16     Pminus1=1,2,1567,-1,50000000000,10000000000000000,"29257873823,488519841428152561,31523570928926921025805379993"
1583   B1=30e9  B2=1e16     Pminus1=1,2,1583,-1,30000000000,10000000000000000,"3167,189961,8589359,43817441"
10303  B1=25e9  no stage 2  Pminus1=1,2,10303,-1,25000000000,25000000000,66
32911  B1=5e9   B2=1e14     Pminus1=1,2,32911,-1,5000000000,100000000000000,66
33091  B1=5e9   B2=1e14     Pminus1=1,2,33091,-1,5000000000,100000000000000,66
33151  B1=5e9   B2=1e14     Pminus1=N/A,1,2,33151,-1,5000000000,100000000000000,66
33161  B1=5e9   B2=1e14     Pminus1=N/A,1,2,33161,-1,5000000000,100000000000000,66
33587  B1=5e9   B2=1e14     Pminus1=N/A,1,2,33587,-1,5000000000,100000000000000,66
41927  B1=4e9   B2=1e14     Pminus1=N/A,1,2,41927,-1,4000000000,100000000000000,66
44549  B1=2e9   B2=1e14     Pminus1=N/A,1,2,44549,-1,2000000000,100000000000000,66
48119  B1=3e9   B2=1e14     Pminus1=N/A,1,2,48119,-1,3000000000,100000000000000,66
49871  B1=1e9   B2=1e14     Pminus1=N/A,1,2,49871,-1,1000000000,100000000000000,66
[/CODE]

SethTro 2021-12-17 00:13

[QUOTE=James Heinrich;595394]The calculator will now accept either TF bitlevel [I]or[/I] ECM T-Level as input. If the latter it will be converted to equivalent TF bitlevel.[/QUOTE]

Thanks James this is great!

One small suggestion: could you make the Factor Probability column on the exponent page a link to the prob calculator with the values of that row filled in?

I think it might be cleanest if you style it so the link color is black (unless hovered over).

firejuggler 2021-12-17 00:25

Also, please refer to [url]https://www.mersenne.ca/userfactors/pm1/1/bits[/url] for "big" PM1 results.

Zhangrc 2021-12-17 00:31

[QUOTE=lisanderke;595408]Let's assume B1=4e12 (as suggested by Zhangrc, no clue what the runtime for B1 would be) and B2=800K times 4e12, and T-level = 50[/QUOTE]
That needs a month or more for stage 1, but you can run 6 to 8 stage 1 jobs in parallel, so it's not a big issue.
Stage 2 needs about the same time, but you had better cut that down to about 1/6, because it can't run in parallel. So the average is one exponent per week.

Prime95 2021-12-17 00:31

[QUOTE=lisanderke;595429]Oh, that did finish the P-1, but it didn't report a factor. Could it be that the save file already knows the 8 factors?[/QUOTE]

That's not good. The save file knows nothing about the known factors.

Do the save files prior to the 100% reporting find known factors when the B1 bound is increased by 999?

lisanderke 2021-12-17 00:36

I'm currently looking at the 40k range. I count 82 exponents from 40k-41k. I'll probably attempt to take them to B1=10^9 in anticipation of better-optimized multi-threaded stage 2. Below is an example of the probabilities I'm expecting (I'm rounding T-Level up from estimates on mersenne.ca):
No stage 2: [URL]https://www.mersenne.ca/prob.php?exponent=40009&ecmtlevel=34&b1=1000000000[/URL]
Stage 2 with B2=B1x800k: [URL]https://www.mersenne.ca/prob.php?exponent=40009&ecmtlevel=34&b1=1000000000&b2=8.0E%2B14[/URL]
With previous posts in mind, this seems like a good range to throw my hat at! I might consider going higher than B1=10^9, but for now I'd like to start saving up lots of good B1-only save files. I'm eager to hear if there are other things to consider when P-1'ing in this range! From my due diligence, I assume Ryan Propper has probably not done ECM here as extensively as in the 5k range. [URL]https://www.mersenne.ca/userfactors/ecm/45930/exponent[/URL]

lisanderke 2021-12-17 00:40

[QUOTE=Prime95;595441]That's not good. The save file knows nothing about the known factors.

Do the save files prior to the 100% reporting find known factors when the B1 bound is increased by 999?[/QUOTE]


I only have save files for the exponents in question from either when the 100% reporting finished on its own, or from when I stopped a worker still reporting 100% (thus dumping progress to the save file of said exponent). Unless you mean other exponents that I took to a certain B1 prior to when I was told how to make Prime95 do only B1 (as in, it was simply skipping stage 2 for those save files)?

lisanderke 2021-12-17 00:50

1 Attachment(s)
I'll post the save files of those exponents that experienced the 100% reporting bug in this zip archive. If anyone wishes to take a crack at these, be my guest :smile:. I'm off to bed now.

SethTro 2021-12-17 01:05

Something I've wanted for a long time and now more than ever I think we need is a repository of P-1 save files.

I have a bit of free time to work on this, and I think it has several parts:

1. Tool to extract current bounds from a save file [90% done [URL="https://mersenneforum.org/showthread.php?t=25378"]MF post[/URL], [URL="https://github.com/sethtroisi/misc-scripts/tree/main/mersenne/status_report"]Code[/URL]]
2. Some extra code to compare two save files [50% done [URL="https://github.com/sethtroisi/misc-scripts/blob/main/mersenne/status_report/filter.py"]Code[/URL]]
3. Webpage with status page & download links [20% done see [URL="https://github.com/sethtroisi/misc-scripts/blob/main/mersenne/status_report/prime95_status.py#L498"]one_line_status[/URL]]
- I'm happy to write and maintain this for a year or so but ultimately it would be nice if James was willing to add it to their page.
4. Upload new results page
- Something simple like [url]https://primegaps.cloudygo.com/[/url] that can take a new save file and replace the existing file if better
- Automate George's trick to verify factors are found (increase B1 by 1000 and test that stage 1 factors are found)

If people were interested in helping out I'd like to start collecting all known save files for exponents less than say 1M.
If you have a large number of save files and want to help I'd love to hear from you.

James Heinrich 2021-12-17 01:12

[QUOTE=SethTro;595438]could you make the Factor Probability column on the exponent page a link to the prob calculator with the values of that row filled in?[/QUOTE][URL="https://memegenerator.net/instance/81318243"]SLIBRSLIB[/URL]-Done.

nordi 2021-12-17 01:37

[QUOTE=charybdis;595411][URL="https://www.mersenne.ca/exponent/5231"]M5231[/URL] is a good example of a potential huge P-1 factor having already been found by Ryan using ECM.[/QUOTE]
I would see the positive side here: The fact that this factor was found by ECM shows that no in-depth P-1 was run on this number, and likely other numbers in that range. At least not prior to October 2020, when that factor was discovered.

Batalov 2021-12-17 06:01

[QUOTE=lisanderke;595429]Oh, that did finish the P-1, but it didn't report a factor. Could it be that the save file already knows the 8 factors?[/QUOTE]
Just in case (because reading this thread is full of surprises, like "oh, he also doesn't know that", "oh, and that too", but surely the attitude is "what is that qualifying for regional/divisional college competition? ...No, I will go straight for the world record" "...Send me to Beijing, right now!")
-->
this is what you are competing against - [url]https://members.loria.fr/PZimmermann/records/Pminus1.html[/url]
Stop thinking in bits, like you will find in [URL="https://mersenneforum.org/showthread.php?t=13977"]many vanity threads[/URL]: "Ah! Look at that factor, it is exceptionally large at 134.47 bits!"
No, [B]66 [/B]digits (or [B]218[/B] bits) is what you want to beat. Good luck!

SethTro 2021-12-17 06:09

1 Attachment(s)
[QUOTE=kruoli;595383]Take the estimated T-Level that is already on your site, then calculate [$](\text{T-level})/\log_{10}{2}[/$]. This is your value.
Example: M1277 has an estimated T-Level of 62.878. The analog TF bit level would be 208 or 209 (yes, very high!).[/QUOTE]

For TF to 2^67, 100% of factors less than 2^67 have been found, and 0% of factors greater than 2^67.
For ECM, the T-level (say 200 bits) is the point where about 50% of factors near that size should have been found; at 195 bits maybe 80% should have been found, and at 205 bits maybe 25%.


Spitballing, it looks like subtracting one from the T-level converted to a log2 bit level is about the right correction to account for the asymmetry.
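(Combined with kruoli's conversion above, that spitballed correction would look something like this sketch:)

[CODE]import math

def tlevel_to_tf_bits_corrected(t_level):
    # Convert ECM T-level (digits) to bits, then subtract ~1 bit as a
    # rough, spitballed correction for the TF/ECM found-probability asymmetry.
    return t_level / math.log10(2) - 1
[/CODE]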

kruoli 2021-12-17 07:06

[QUOTE=SethTro;595446]If you have a large number of save files and want to help I'd love to hear from you.[/QUOTE]

petrw1 and masser gave me tons of files some time ago, but mostly from the <2K project. For these, I was planning something similar to your ideas. But I have not put much work into it yet.

bur 2021-12-17 07:15

[quote]Instead, further increasing B1 with every round seems like the best option for now.[/quote]
I'm not sure I got that right, but is the plan to run consecutive P-1 with increasing bounds on the same exponent? Isn't that a waste of the previous runs? Or does 'round' refer to ranges of larger exponents?

ATH 2021-12-17 07:16

[QUOTE=SethTro;595446]If people were interested in helping out I'd like to start collecting all known save files for exponents less than say 1M.
If you have a large number of save files and want to help I'd love to hear from you.[/QUOTE]

I have quite a few P+1 save files, are those of interest as well?

Prime95 2021-12-17 07:22

[QUOTE=Batalov;595456]Just in case (because reading this thread is full of surprises, like "oh, he also doesn't know that", "oh, and that too", but surely the attitude is "what is that qualifying for regional/divisional college competition? ...No, I will go straight for the world record" ...

[B]66 [/B]digits (or [B]218[/B] bits) is what you want to beat. Good luck![/QUOTE]

I think top 3 is quite likely (no, I didn't do any calculations to back that up!). If we take all Mersennes below 100K to B1=10^9, B2=10^13 (or better) and consider that we get 5 free digits (any factor q of M[I]p[/I] has the form q=2kp+1, so q-1 is automatically divisible by 2p, which is worth about 5 digits for these exponents), that's a lot of chances to get lucky.

Even if a record is not found, any new factors are most welcome. One could even lead to a newly fully factored Mersenne.

IMO, that makes the effort worthwhile.

bur 2021-12-17 07:41

[QUOTE=Batalov;595456]this is what you are competing against - [url]https://members.loria.fr/PZimmermann/records/Pminus1.html[/url]
Stop thinking in bits, like you will find in [URL="https://mersenneforum.org/showthread.php?t=13977"]many vanity threads[/URL]: "Ah! Look at that factor, it is exceptionally large at 134.47 bits!"
No, [B]66 [/B]digits (or [B]218[/B] bits) is what you want to beat. Good luck![/QUOTE]What's the difference between thinking in log_10 vs log_2? And one might argue that the former website is but a vanity list as well, just for the big boys with lots of university/company resources.

lisanderke 2021-12-17 08:44

[QUOTE=Prime95;595441]That's not good. The save file knows nothing about the known factors.

Do the save files prior to the 100% reporting find known factors when the B1 bound is increased by 999?[/QUOTE]
Exponent [M]5003[/M] had B1=5x10^9 and B2=4.4x10^15. I re-ran with B1=5x10^9 (ending in 999) for the following result:

[C][Thu Dec 16 00:35:50 2021]
UID: lisander/30.8b5, M5003 completed P-1, B1=5000000000, B2=4419083571242340, Wi4: 3CB31752
[Fri Dec 17 09:35:27 2021]
P-1 found a factor in stage #1, B1=5000000999.
UID: lisander/30.8b5, M5003 has a factor: 3281293532195321113569357982360497098983754047729275907049 (P-1, B1=5000000999)
[/C]
Question: why does my composite factor show up in the results history of this exponent? Shouldn't Primenet already have been aware of this composite factor?

LaurV 2021-12-17 09:12

Is this "new activity" (since I almost disappeared from the forum*) a reason for the "strange" lifting I see in P-1 tops?

I suspected for a while that we have some cheaters there, I mean people jumping hundreds of positions in a week or two. But when you look at their activity, they have a very low number of trials, which does not justify the high scores at all by any possible calculation, unless they did really, and I mean REALLY, deep, DEEP, P-1 (i.e. to very high limits B1 and B2) to get that huge amount of credit. But, BUT, in that case, they were ridiculously unlucky, with only 2% (and under) factors found.

So, they either did low-bounds P-1 to find so few factors, which doesn't match the high credit, or they did high-bounds P-1, which would fit the credit given but doesn't fit the number of factors.

Or they are cheating (it is easy to cheat on P-1 "no factor" results).

One situation that would still fit, with no cheating, is people doing a lot of P-1 in the very low ranges (which were swept by TF and lots of ECM already). Is that what's happening here?


---------
*the reason for my "almost disappearance" from the forum I will post later; it is a funny story, not related to math, and I need time to type it...

SethTro 2021-12-17 09:24

[QUOTE=kruoli;595459]petrw1 and masser gave me tons of files some time ago, but mostly from the <2K project. For these, I was planning something similar to your ideas. But I have not put much work into it yet.[/QUOTE]

[QUOTE=ATH;595461]I have quite a few P+1 save files, are those of interest as well?[/QUOTE]

I'll reach out to both of you next week as I make more progress and start adding upload functionality.

[B]Rough website with list of current files I have: [/B] [url]https://pminus1.cloudygo.com/[/url]

ATH 2021-12-17 09:57

[QUOTE=lisanderke;595466]Question: why does my composite factor show up in the results history of this exponent? Shouldn't Primenet already have been aware of this composite factor?[/QUOTE]

Primenet is aware, but your local Prime95 is not; you should add the known factors in the worktodo line:

Pminus1=N/A,1,2,5003,-1,5000000000,4400000000000000,"10007,1050631,6087330966532976879,51270087288962073254717551943,68882595903330238838552613955694480987987527"

lisanderke 2021-12-17 09:58

[QUOTE=ATH;595470]Primenet is aware, but your local Prime95 is not; you should add the known factors in the worktodo line:

Pminus1=N/A,1,2,5003,-1,5000000000,4400000000000000,"10007,1050631,6087330966532976879,51270087288962073254717551943,68882595903330238838552613955694480987987527"[/QUOTE]
Oops, you misunderstood me. I meant, why does my composite factor show up here [url]https://www.mersenne.org/report_exponent/?exp_lo=5003&full=1[/url]

lisanderke 2021-12-17 10:10

1 Attachment(s)
[QUOTE=LaurV;595467]Is this "new activity" (since I almost disappeared from the forum*) a reason for the "strange" lifting I see in the P-1 tops?

I suspected for a while that we have some cheaters there, I mean people jumping hundreds of positions in a week or two. But when you look at their activity, they have a very low number of trials, which does not justify the high scores at all by any possible calculation, unless they did really, and I mean REALLY, deep, DEEP, P-1 (i.e. to very high limits B1 and B2) to get that huge amount of credit. But, BUT, in that case, they were ridiculously unlucky, with only 2% (and under) factors found.

So, they either did low-bounds P-1 to find so few factors, which doesn't match the high credit, or they did high-bounds P-1, which would fit the credit given but doesn't fit the number of factors.

Or they are cheating (it is easy to cheat on P-1 "no factor" results).

One situation that would still fit, with no cheating, is people doing a lot of P-1 in the very low ranges (which were swept by TF and lots of ECM already). Is that what's happening here?


---------
*the reason for my "almost disappearance" from the forum I will post later; it is a funny story, not related to math, and I need time to type it...[/QUOTE]
Hi there! A pre-beta version of Prime95 v30.8 is out ("out" is a big word here; with the current latest build, b5, included, it is meant ONLY for P-1, not any other workload). This version has been pushed out as a pre-beta for P-1 specifically for the sub-2K project. It boasts vast improvements to stage 2 on small(er) exponents. Hence the formerly outrageous bounds you may have seen; see the attachment for some of my own testing with 30.8b5 :smile: If you're interested in trying out this pre-beta, I suggest you check out the following thread: [URL]https://www.mersenneforum.org/showthread.php?t=27366[/URL]

lisanderke 2021-12-17 10:24

[QUOTE=SethTro;595468]I'll reach out to both of you next week as I make more progress and start adding upload functionality.

[B]Rough website with list of current files I have: [/B] [URL]https://pminus1.cloudygo.com/[/URL][/QUOTE]
Be sure to keep this thread in the loop! Let us know when some feature/functionality is added/turned on so that we're able to exchange P-1 efforts specifically for this project. Thank you for the work done so far!

lisanderke 2021-12-17 10:27

[QUOTE=Batalov;595456]Just in case (because reading this thread is full of surprises, like "oh, he also doesn't know that", "oh, and that too", but surely the attitude is "what is that qualifying for regional/divisional college competition? ...No, I will go straight for the world record" "...Send me to Beijing, right now!")
-->
this is what you are competing against - [URL]https://members.loria.fr/PZimmermann/records/Pminus1.html[/URL]
Stop thinking in bits, like you will find in [URL="https://mersenneforum.org/showthread.php?t=13977"]many vanity threads[/URL]: "Ah! Look at that factor, it is exceptionally large at 134.47 bits!"
No, [B]66 [/B]digits (or [B]218[/B] bits) is what you want to beat. Good luck![/QUOTE]
I'm afraid I must not have clarified enough that this project would limit itself to Mersenne numbers. With that in mind,

[QUOTE=firejuggler;595439]Also, please refer to [URL]https://www.mersenne.ca/userfactors/pm1/1/bits[/URL] for "big" PM1 results.[/QUOTE]
The 'bit-size' to beat is 173.22. That makes P-1 efforts in ranges where ECM factors of sometimes much higher bit levels (say 240) have been found quite obsolete (for my particular search and patience levels).


Though I quite agree with the following statements!
[QUOTE=Prime95;595462]I think top 3 is quite likely (no, I didn't do any calculations to back that up!). If we take all Mersennes below 100K to B1=10^9, B2=10^13 (or better) and consider that we get 5 free digits (any factor q of M[I]p[/I] has the form q=2kp+1, so q-1 is automatically divisible by 2p, which is worth about 5 digits for these exponents), that's a lot of chances to get lucky.

Even if a record is not found, any new factors are most welcome. One could even lead to a newly fully factored Mersenne.

IMO, that makes the effort worthwhile.[/QUOTE]

firejuggler 2021-12-17 12:21

1 Attachment(s)
I had low hopes and I was right. Total time: about 27,000 seconds (stage 1 took 11,000 seconds, stage 2 15,600).

lisanderke 2021-12-17 12:42

[QUOTE=firejuggler;595479]I had low hopes and I was right. Total time: about 27,000 seconds (stage 1 took 11,000 seconds, stage 2 15,600).[/QUOTE]
The low hopes were probably justified; according to mersenne.ca with T-level 58:

[URL]https://www.mersenne.ca/prob.php?exponent=1811&ecmtlevel=58&b1=40840606334&b2=3.7221820308052E%2B16[/URL]
Probability = [B]0.023413%[/B]

James Heinrich 2021-12-17 14:36

[QUOTE=lisanderke;595466]why does my composite factor show up in the results history of this exponent?
Shouldn't Primenet already have been aware of this composite factor?[/QUOTE]Why wouldn't it show up in the exponent history? Maybe you didn't find any new factors, but we still need to record the space that you searched and what you found in it.

techn1ciaN 2021-12-17 15:04

[QUOTE=lisanderke;595466]Question: why does my composite factor show up in the results history of this exponent? Shouldn't Primenet already have been aware of this composite factor?[/QUOTE]

You're experiencing this problem: [URL]https://www.mersenneforum.org/showthread.php?t=27346[/URL]

[QUOTE=Batalov;595456]...reading this thread is full of surprises, like "oh, he also doesn't know that", "oh, and that too", but surely the attitude is "what is that qualifying for regional/divisional college competition? ...No, I will go straight for the world record"...[/QUOTE]

I'll say that I admire Mr. Viaene's tenacity in being willing to jump right in, get something rolling even if it's ambitious, and learn as he goes. Given the extent of advice already offered, I don't think much would have been gained from him cautiously poking the topic for months and making absolutely sure he had a completely firm handle on it before starting a thread. At the end of the day GIMPS searches for prime numbers for fun; as Mr. Woltman points out, the absolute worst "failure" case is finding a few interesting factors and "wasting" (open to interpretation) a few cycles.

VBCurtis 2021-12-17 16:40

[QUOTE=bur;595460]I'm not sure I got that right, but is the plan to run consecutive P-1 with increasing bounds on the same exponent? Isn't that a waste of the previous runs? Or does 'round' refer to ranges of larger exponents?[/QUOTE]

A Stage-1 save file can be resumed from that B1 to a higher B1 without losing any work. So, maybe?

Also, one could run stage 1 on small-memory machines to rather big bounds, and post the work to a repository where someone with a big-memory machine could run stage 2 to quite high bounds.

lisanderke 2021-12-17 19:49

[QUOTE=James Heinrich;595491]Why wouldn't it show up in the exponent history? Maybe you didn't find any new factors, but we still need to record the space that you searched and what you found in it.[/QUOTE]
:picard: Should've realized, that makes sense! :grin:

[QUOTE=techn1ciaN;595495]
I'll say that I admire Mr. Viaene's tenacity in being willing to jump right in, get something rolling even if it's ambitious, and learn as he goes. Given the extent of advice already offered, I don't think much would have been gained from him cautiously poking the topic for months and making absolutely sure he had a completely firm handle on it before starting a thread. At the end of the day GIMPS searches for prime numbers for fun; as Mr. Woltman points out, the absolute worst "failure" case is finding a few interesting factors and "wasting" (open to interpretation) a few cycles.[/QUOTE]
Thanks for the kind words! Another reason for jumping in before doing lots more research by myself was that I knew there had been many before me trying (and perhaps succeeding in doing) the same thing! I now know it was the right decision to dive straight in, instead of trying to figure out a lot of things on my own. I have previous posts in this thread to prove that lots of people are keen to help others understand these things better. :smile: (I did count on others helping me from the start, as I pointed out in my original post:)
[QUOTE]With all of that out of the way, I need help devising proper bounds for all of these ranges and exponents... As I've said before, the math escapes me, but hopefully I can learn from this project![/QUOTE]

Batalov 2021-12-17 20:03

[QUOTE=lisanderke;595475]I'm afraid I must not have clarified enough that this project would limit itself to Mersenne numbers. With that in mind,
...The 'bit-size' to beat is 173.22. [/QUOTE]
Nope, higher.
A 58-digit prime factor of a [URL="https://mathworld.wolfram.com/MersenneNumber.html"]Mersenne number[/URL] was found with P-1 (in 2005, too). That's [URL="http://factordb.com/index.php?id=1100000000012855105"]189.80 bits[/URL].

Also, keep in mind that composite factors were and should remain ineligible. One can trivially find insanely large composite factors with P-1. One [I]"excellent" [/I]way to find them is indeed to not include already known factors while queueing a "World Record" job.
___

This little project is somewhat reminiscent of "[URL="https://en.wikipedia.org/wiki/Rosencrantz_and_Guildenstern"]Rosencrantz and Guildenstern[/URL] Are Dead". With good execution, it might be excellent. With a sophomoric implementation, it might be mediocre.
How is it reminiscent? Simple. We take a popular play, pick totally decorative characters, and make them the central characters of "your" play. For GIMPS, P-1 is simply a crutch, a tool; a fast and relevant way to remove the chaff. For this project, it is suddenly pretending to be a proper play, important in its own right. "Let's create a World Record for a general method, but only when it is applied to a tiny fraction of its possible uses." = "Let's set up and start fighting for a World Record in long jumps with both hands tied behind the jumpers' backs." Cute, but only as a joke. My 2 cents.

I understand George's interest, sure. It is generally called "riding out a hype", that is -- it doesn't matter why the hype evolved. As long as it can help the main project, that's great. That is a valid reason, and I have nothing against it. It is also a great reason to revisit and debug a rarely visited branch of the code. :rolleyes:

lisanderke 2021-12-17 20:22

[QUOTE=Batalov;595526]Nope, higher.
A 58-digit prime factor of a [URL="https://mathworld.wolfram.com/MersenneNumber.html"]Mersenne number[/URL] was found with P-1 (in 2005, too). That's 189.80 bits.

Also, keep in mind that composite factors were and should remain ineligible. One can trivially find insanely large composite factors with P-1. One [I]"excellent" [/I]way to find them is indeed to not include already known factors while queueing a "World Record" job.[/QUOTE]


Ah, Mersenne numbers with prime exponents then, those that are listed on mersenne.ca and mersenne.org. That is what I meant. Apologies for assuming this was implied.

Also, you may be confusing the main topic of this thread with another conversation, started between George Woltman and me, about some weird behavior of Prime95 I had been seeing with v30.8b5. I am aware that composite factors found with P-1 are not prime factors and, consequently, not eligible for taking the '[B]World Record Title for P-1 Prime Factor Found on a Mersenne Number With a Prime Exponent[/B]'. Sigh. See below for the reason I posted a composite factor result:

[QUOTE=Prime95;595403]-snip- Here's some good news. You can check if your save files are good. Create a worktodo.txt entry with a slightly larger B1 and [B]no known factors[/B]. Let 30.7b9 run that and see if it finds some or all of the known factors. I think you'll find you're in good shape.

Example: Pminus1=N/A,1,2,5003,-1,200000999,200000999[/QUOTE]

[QUOTE=lisanderke;595406]-snip- I used the following worktodo entry as suggested: Pminus1=N/A,1,2,5903,-1,200000999,200000999
Copied save file for 5903 from 30.8b5 to 30.7b9. And Prime95 turned it into a 'bad' save file.[/QUOTE]
[QUOTE=Prime95;595416]My bad. 30.7 cannot read a 30.8 save file. Try your save file and "Pminus1=N/A,1,2,5903,-1,200000999,200000999" with 30.8.[/QUOTE]

[QUOTE=lisanderke;595429]Oh, that did finish the P-1, but it didn't report a factor. Could it be that the save file already knows the 8 factors?[/QUOTE]

[QUOTE=Prime95;595441]That's not good. The save file knows nothing about the known factors.

Do the save files prior to the 100% reporting find known factors when the B1 bound is increased by 999?[/QUOTE]
The rest is history :)


EDIT: In retrospect, I could have prompted @Batalov to simply re-read parts of this thread to make them better understand why I posted a composite factor result in this particular thread. Though this probably makes for a more pleasant reading experience.

Batalov 2021-12-17 20:29

[QUOTE=lisanderke;595528]EDIT: In retrospect, I could have prompted @Batalov to simply re-read parts of this thread to make them better understand why I posted a composite factor result in this particular thread. This way it's probably easier to follow than re-reading a conversation with lots of other posts going on in between![/QUOTE]
Retrospects are great! :thumbs-up:
Because in retrospect you are simply retelling what was already circulating on this forum 10 years ago, 5 years ago. Every few years. But did we recommend that you simply scan the forum and re-read the old threads? No, that would be cruel. Why on earth would you want to do that? :rolleyes: You are a writer, not a reader.

lisanderke 2021-12-17 20:41

[QUOTE=Batalov;595530]-snip- you are a writer, not a reader.[/QUOTE]
On the flip side, writing gives me something to do while I take range 40k to B1=10^9. Speaking of, I'll have this completed by the end of the weekend (approx.) and I might make some more attempts at stage 2 for these exponents with 30.8b5 (since 5k was giving me such a hard time!)

chalsall 2021-12-17 20:52

[QUOTE=lisanderke;595532]On the flip side, writing gives me something to do...[/QUOTE]

I /think/ you might have missed Batalov's point.

Put another way, [URL="https://liberalarts.vt.edu/magazine/2017/history-repeating.html"]History Repeating[/URL], [URL="https://en.wikipedia.org/wiki/Prior_art"]Prior art[/URL], etc, etc, etc.

You are relatively new here. And have bitten off quite a bit.

We tend to be rather patient around these here parts for new ideas and new people.

But our patience is somewhat limited when the seriousness is not understood and appreciated by those who choose to play in this space.

FWIW...

lisanderke 2021-12-17 20:56

[QUOTE=chalsall;595533]I /think/ you might have missed Batalov's point.

Put another way, [URL="https://liberalarts.vt.edu/magazine/2017/history-repeating.html"]History Repeating[/URL], [URL="https://en.wikipedia.org/wiki/Prior_art"]Prior art[/URL], etc, etc, etc.

You are relatively new here. And have bitten off quite a bit.

We tend to be rather patient around these here parts for new ideas and new people.

But our patience is somewhat limited when the seriousness is not understood and appreciated by those who choose to play in this space.

FWIW...[/QUOTE]Oh, I quite got the point. I just saw no reason to reply to it specifically and I'd rather keep this thread on topic. With all due respect, I value everyone's patience but I'm not going to re-iterate the many thanks I've given to those that deserve it, for their patience, and redirect my thanks to Batalov for pointing out the obvious instead. (Amongst the obvious things being: Yes, it's been done before. Yes, I don't know more than most people posting here. Yes, I'm eager to do things I don't quite understand.)

chalsall 2021-12-17 21:10

[QUOTE=lisanderke;595535]Yes, I'm eager to do things I don't quite understand.[/QUOTE]

Excellent. That is how one learns.

However...

Please understand that others might understand things better than you do. The counsel given was sound. And you seemed to ignore it.

lisanderke 2021-12-17 21:31

[QUOTE=chalsall;595536]Excellent. That is how one learns.

However...

Please understand that others might understand things better than you do. The counsel given was sound. And you seemed to ignore it.[/QUOTE]


See, the way you put it was much easier to understand and much nicer to read. As for the counsel given: I'd rather not re-read years' worth of posts, many of which diverge from the topic at hand quite quickly (these posts being one such example!), which makes it much harder to find relevant information. Especially for a simple and short question. I'd rather 'write' first (when comprehensive tools such as Google, this forum's own search function (which does not even seem to accept "P-1" as a valid search term), software documentation, and the like don't provide me with an answer) and then 'read' the reply from someone who already possesses the information I need. If someone else prefers to search and read through decades' worth of posts on this particular forum, be my guest.

chalsall 2021-12-17 21:52

[QUOTE=lisanderke;595538]If someone else prefers to search and read through decades worth of posting in this particular forum, be my guest.[/QUOTE]

What you don't seem to appreciate is that this is how a claim of prior art works. It can be a bit onerous (which is exactly the point).

Have you ever tried to claim a patent on an original idea? Teams of people are involved just reading tonnes of language. And even then you need to reserve a few million dollars to defend the claim.

Please forgive me for this, but I sometimes [URL="https://www.youtube.com/watch?v=3qqE_WmagjY"]try to be funny[/URL]. Or, at least, be amusing...

SethTro 2021-12-17 23:39

[QUOTE=lisanderke;595474]Be sure to keep this thread in the loop! Let us know when some feature/functionality is added/turned on so that we're able to exchange P-1 efforts specifically for this project. Thank you for the work done so far![/QUOTE]

I got thirty minutes to work on this today:

* I added all the files I have (which include backups that should be filtered)
* You can now download the files

Next up is sorting and filtering the existing files.

techn1ciaN 2021-12-18 00:50

[QUOTE=Batalov;595526]For GIMPS, P-1 is simply a crutch, a tool; a fast and relevant way to remove the chaff. For this project, it is suddenly pretending to be the proper play, important in its own right.[/QUOTE]

Do you have a similar opinion about petrw1's sub-2,000 project?

Some GIMPS members are more interested in factoring than in the "proper" Mersenne prime search. These members aren't declaring factoring to be categorically more important; they just personally prefer to do this type of work. Inasmuch as work done [I]is[/I] a matter of personal preference, GIMPS isn't entitled to have anyone use their cycles in any particular way.

Obviously, the vast majority of GIMPS throughput is "relevant" TF, P-1, and primality testing. I don't think this makes it a sin for small groups to occasionally coordinate using the database for something else that they find interesting. (If nothing else, PRP-CF is an official work type and PrimeNet will happily hand out factoring AIDs for exponents far below the DC and PRP wavefronts, so it would be hard to say that GIMPS has any firm position against "useless" factoring.)

[QUOTE=chalsall;595533]I /think/ you might have missed Batalov's point.[/QUOTE]

In my opinion, Batalov could have given Mr. Viaene the basic respect of choosing to make this point directly instead of with backhanded sarcasm, which is inherently open to misinterpretation. Sarcasm almost never conveys well on an Internet forum, and some individuals (such as myself) can have significant difficulty with it even when full context is available.

Batalov 2021-12-18 02:42

As one member of the forum used to say, "You can lead a horse to water but you can't make him drink."
[SPOILER]It is of course the easiest to not even lead them to the water. What would be the use?[/SPOILER]

Far too many people believe that mersenneforum is solely an extension of mersenne.org. It isn't - not solely, anyway; it has grown into a lot of little clubs. But newcomers come, and they don't want to read or search through the forum. "They have never played the violin, but they don't see what the big deal is. Of course they can play. And they will break a world record while doing it, too."

No, I didn't suggest reading just the forum. Read MathWorld, read the OEIS, read Pomerance's book, for ${Deity}'s sake, read [I]something[/I], anything at all. The danger of math is that people keep thinking there is a king's road through it; they have for centuries. "I have thought of something that no one has ever thought before." No.
"So, I have to make some errors so that I can regret them?" -- "You don't have to make errors to regret them. You have already made plenty. You simply don't know what they are." (paraphrasing "The Big Kahuna")

And if you did just a little of that, then you would easily realize that people who do [I]anything[/I] below 57,000,000 aren't doing anything at all for GIMPS - altogether. They are on their own. They are searching for additional factors for numbers that were eliminated 20+ years ago. Some of them realize that what they are trying to do is identical to helping the Cunningham project. But not really - most of them have no idea what I just said. That's sad.

Sarcasm? What sarcasm? Reality, my friend.
:answers:
[QUOTE=techn1ciaN;595557]Do you have a similar opinion about petrw1's sub-2,000 project? [/QUOTE]
Never heard of it.
Wayne probably knows what he is doing (he's been around longer than 20 random accounts combined) and I haven't heard him using too much hyperbole.

techn1ciaN 2021-12-18 04:02

[QUOTE=Batalov;595563]...you would easily realize that people who do [I]anything [/I]below 57,000,000 aren't doing anything at all for GIMPS - altogether.[/QUOTE]

I don't disagree. I just don't understand what the point of complaining about that is supposed to be.

In the first place, GIMPS is an interest project, done for fun. This isn't to say it can't have practical impacts — Prime95's immense popularity for hardware validation, for one example — but its largest / "primary" appeal is that you can set your computer doing interesting work, be part of a larger community doing the same, and hopefully learn something in the process. So, is it so bad that someone "[isn't] doing anything at all for GIMPS" when GIMPS, depending upon one's perspective, isn't doing much itself? Or to frame that more positively, levity is important; it's a fun project, so one should try to have fun with it.

In the second place, if you try to admonish someone for what they choose to run on their own computer, then they will turn off that computer and leave sooner than they will start running what you prefer instead.

[QUOTE=Batalov;595563]Never heard of it.[/QUOTE]

He is [URL="https://www.mersenneforum.org/showthread.php?t=22476"]coordinating an effort[/URL] to reach less than 20 million unfactored exponents in the mersenne.org database. He hopes to accomplish this by sub-dividing the database into many small "ranges," with an equal factoring goal for each, and then focusing on the ranges where PrimeNet's standard factoring protocols have not gotten (or probably will not get) to his goal automatically. This results in lots of work on small exponents (< 30 M, sometimes < 10 M) that were already LLed and DCed years or decades ago — in other words, something probably thoroughly useless from your point of view. The project had its first inkling in late 2017 and he projects that it might wrap up within 2022.

At the end of the day I'll simply defer to George Woltman — [URL="https://www.mersenneforum.org/showpost.php?p=577077&postcount=13"]I like factors[/URL] :smile:

Batalov 2021-12-18 06:08

First of all, you are preaching to the choir. I have seen more factors* than you will in your life. So if you count those who like factors more than I do, there would be very few. After all, if you cared, then you would have first found that I removed >2 million candidates from [I]this[/I] database that you keep talking about - singlehandedly - way before it became fashionable. But you can't, can you? You, too, are a writer, not a reader. Cool. Good for you. Carry on entering a community and giving lectures about how it should be run.

Also, you must be hearing something that I never said. You are arguing with something that is inside your head. Deal with it. Where did I say who should or should not compute something? I only said that those who don't think while doing it will repeat trivial mistakes that are obvious to most (or at least some) others, that's all. I warn against the "this must be easy, I just made up something and it looks true to me" attitude. This attitude also [URL="https://www.youtube.com/watch?v=rlC_z56rTik&t=153s"]leads to deep disappointments[/URL], and if you follow someone to the water you could avoid them. There is that mysterious Poisson that [I]xilman[/I] keeps bringing up (wth?), there are even better statistics, confidence intervals, and more.

While in college, we had a popular (self-deprecating) anecdote.
"Scientists put a chimp in a cage, placed a tall plastic tree with a banana on top, and a stick. The chimp jumps, the chimp shakes the tree, but the banana doesn't fall. The scientists look at the chimp encouragingly and whisper to themselves, 'well, c'mon, think! think!' Then, the chimp pensively stops for a moment, then takes that stick and after a few tries, gets the banana.
Now the Scientists put a sophomore <from our college> in the same cage. The student shakes the tree vigorously for a few hours. The scientists look at him though the glass and gesture 'Think! Think!' The student yells: What is there to think about?! [B]I must shake the tree[/B]!!"

Every year we see this type of conversation here:
[I]Person A[/I]: I see what you are trying to do and you are currently doing this wrong.
[I]Person B[/I]: how dare you tell me what to do and what not to do? I will leave this project and you will be very [B]very[/B] sorry! That will show you!

Remarkably constant.
__________
*and primes? Primes, too :rolleyes:

techn1ciaN 2021-12-18 14:41

[QUOTE=Batalov;595570]I have seen more factors* than you will in your life ... I removed >2 million candidates from [I]this [/I]database that you keep talking about - singlehandedly - way before it became fashionable.[/QUOTE]

Congratulations on having joined the project when lots of easy breadth-first TF was still available — not all of us can be so lucky. There's no need to diminish those who are doing the best they can with the factoring assignments that are currently available.

[QUOTE=Batalov;595570]You, too, are a writer, not a reader ... Carry on entering a community and giving lectures about how it should be.[/QUOTE]

I would never make any categorical statement. I am purely voicing my opinion and anyone is free to ignore it, just as they are free to ignore Mr. Viaene. If he truly is blundering headfirst into a losing endeavor (and he does not learn enough to rectify this), then he will either fail to recruit any participants or his participants will fail to see any results, and his project will fizzle out within a few months organically.

FWIW, I've been a GIMPS member for five years (although I'm sure that still means nothing to you), and I spent about a year lurking on the forum without an account before opening one. How recently someone joined here doesn't necessarily mean anything.

LaurV 2021-12-18 15:44

I can pee higher than all three of you!

Dr Sardonicus 2021-12-18 15:49

[QUOTE=techn1ciaN;595583]<snip>
I would never make any categorical statement.
<snip>[/QUOTE]
[quote][b]ALL:[/b] What, never?

[b]CAPTAIN:[/b] No, never!

[b]ALL:[/b] What, never?

[b]CAPTAIN:[/b] Well, hardly ever![/quote] -- [i]HMS Pinafore[/i] by Gilbert and Sullivan

Uncwilly 2021-12-18 19:22

I like factors. I have spent many actual CPU wall-clock years doing only factoring (out in the 332M range). I have also done mainly DCs for a while now. And I am doing PRP-CF. Etc. I think each of us can do work in multiple parts of the project, whatever excites us at the time. As is, there is the primary goal (find the next MP, and maybe all of them). The secondary goal (confirm that we did not miss an MP). Then there are the tertiary goals (finding factors for Mersenne numbers). And then there are the non-Mersenne-related projects hosted on the MF. There is no need for anyone to :poop: on any of the other goals, projects, or individuals involved. And unless your name is Lucas, Robinson, Slowinski, or Woltman, there is always someone else who has bigger bragging rights.

Many newbies in most forums go through a similar arc of enthusiasm/knowledge/hubris/etc. I think that several of the new folks in the last year have been fine additions to the community. The older members might get jaded in their views and need to take a breath and count to 127 before posting a tirade.

Batalov 2021-12-18 21:25

[QUOTE=techn1ciaN;595583]If he truly is blundering headfirst into a losing endeavor (and he does not learn enough to rectify this), then he will either fail to recruit any participants or his participants will fail to see any results, and his project will fizzle out within a few months organically.[/QUOTE]
If?

[QUOTE=petrw1;464177]Thinking out loud about getting under 20M unfactored exponents

Breaking it down I'm thinking ... <good!>

So I did some Excel ciphering looking at: <very good!>
- how many more factors are required in each range
- how many exponents need to be TF'd at the current bit level to get there (could require several bit levels to complete)
- how many GhzDays each assignment would take.[/QUOTE]
Good project, good goals. Steady and useful work. No talk of "world records" - but he likes factors and makes others like them. This is where world records will really come from - as a byproduct of a lot of diligent work.

[QUOTE=lisanderke;595271]World Record P-1!

I'd like to organize a thread to find World Record Factors with P-1 on small exponents.
I'll now refer to this project as WR P-1 (at the risk of it sounding too pretentious, perhaps...)
Similar to Petrw1's <2k (sub twok, <twok...)[/quote]
Pretentious? You said it, man.

Like I said, he is simply trying to reinvent the Cunningham project (and only the 2- portion of it, and only with P-1), and has not put any thought into it. No resource planning. No proof of concept. No knowledge of what was done before. "Zimmermann? Who the hell is he?"
Worst of all - this is how one can successfully [I]slow down[/I] Wayne's project, and the whole project that he says this is a sub-sub-project of.

"I want to be a ... lion tamer!"

P.S. The in-joke: can you guess who the last person to change the thread's title was (which is our old little game)? You'd be surprised, and it actually makes so much sense once you find out :missingteeth:

lisanderke 2021-12-18 22:04

I realize now that it would have made more sense to name this thread '(Re-)doing P-1 on small exponents' (with 're-' in brackets, since there are also small exponents that have not had P-1 done before). This was my original intention: doing P-1 on exponents that had not had much (if anything) done before. With this in mind, I do believe it makes sense to expect that any factors found have a good shot at being the biggest of their kind. Though portraying it as if finding a "world record" was the goal is indeed not right. Or, as you put it eloquently, [QUOTE]No talk of "world records" - but he likes factors and makes others like them. This is where world records will really come from - as a byproduct of a lot of diligent work.[/QUOTE] I agree that it was wrong of me to give the thread its current title. You could have simply said that and suggested another name, if that is what you're bothered with.

Prime95 2021-12-18 22:06

[QUOTE=Batalov;595601]Pretentious? [/QUOTE]

or perhaps a simple marketing ploy to attract interest.

Think about it: GIMPS does the same thing. "Join up - find a world record prime, become famous!"
That sells a lot better than "Greatly increase your electricity bill for a minuscule chance of finding a prime that few will care about."

Now, back to the topic at hand...
The next version of 30.8 has optimizations that make increasing a stage 1 bound faster (maybe 20 or 25%).
I'm also testing a small improvement to stage 2 for big polys - a modest 6.5% on the M79147 run I'm testing now (B1=1B, B2=20T).

Batalov 2021-12-18 22:30

[QUOTE=lisanderke;595603]I realize now that it would have made more sense to name this thread '(Re-)doing P-1 on small exponents' (with 're-' in brackets since there are also small exponents that have not had P-1 done before). ...
I agree that it was wrong of me to give the thread its current title. You could have simply said that and suggested another name, if that is what you're bothered with.[/QUOTE]
Good start. But why would anyone suggest a change of title if that title would contradict the contents of the thread? You don't suggest rewriting the whole thread, I hope?
The title is the least of the problems. The title is a start.

[QUOTE=lisanderke;595603]This was my original intention, doing P-1 on exponents that had not had much (if anything) done before. [/QUOTE]
There is a problem hiding right here, and you don't see it. P-1 runs (and very large ones) [B]were in fact done[/B] on them - you simply don't know about it. There are a lot of folks (not to be mentioned too often): Paul Z has done tons, Ryan P has done much, SSW did monstrous tons of it. They keep their results [I]elsewhere[/I]. Why? Because inserting data into the GIMPS database was not easy for the decades during which they did that work. (I know that from George, first hand.) So they didn't. So, you assume that the work has not been done (but it had), and the rest of the argument falls apart (looking at it from a different angle, i.e. Bayesian-statistically: your prior is way off, so your posterior* will not be what you expect). If you collect your future results (and I hope that you will continue what you started), then statistical analysis of them will show far fewer factors found than expected.
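
To make that concrete, here is a toy sketch of the statistics in Python - every number in it is hypothetical, chosen only to show the shape of the argument. If each exponent independently has a small chance of yielding a P-1 factor, the count of factors found over a range is roughly Poisson, and a badly wrong prior shows up immediately in the observed count:

[CODE]# Toy illustration only -- all rates below are hypothetical, not measurements.
# If each of n exponents independently yields a factor with probability q,
# the number of factors found is approximately Poisson with mean n*q.
from math import exp

n = 1000  # exponents worked (hypothetical)
scenarios = {
    "no prior work":  0.03,   # per-exponent success rate (hypothetical)
    "prior deep ECM": 0.003,  # residual rate after earlier work (hypothetical)
}

for label, q in scenarios.items():
    lam = n * q
    # Under a Poisson(lam) model, P(zero factors) = exp(-lam)
    print(f"{label}: expect ~{lam:.0f} factors, P(none) = {exp(-lam):.3f}")[/CODE]

If you budgeted for ~30 factors and found three, it is the "no prior work" prior that is broken, not your luck.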

That is why I keep repeating, like a broken record: you are excited about your new idea. I get it! The next step is to do research: what has already been done? What could I possibly be missing about what has already been done? "An ounce of research will save a ton of resources" / "An ounce of prevention is worth a pound of cure."

There is a very well-known trend in general (don't think just of GIMPS): negative results are much harder to find in prior art. In science, people have trouble publishing negative findings - so they don't. In applied science, people might have tried something and found nothing, so that also stays only in their logs.
________
*I [I]swear[/I] that I am not swearing there. These are proper words.

jwaltos 2021-12-18 22:41

[QUOTE=Batalov;595607]Good start. -snip-[/QUOTE]

Caustic wit with good content. A Scoville rating of 850k.
Re-reading certain threads (historical records a la Galaxy Quest) and posts, and comparing them with the content of the newer ones, gives a good recipe... sketch... outline... of a methodology to implement, and then some. Hardware isn't everything, though, and solid theory* is prevalent/intrinsic within those posts.
*As an afterthought: solidity based on mathematical proof rather than something like phlogiston. Dephlogistication is an ongoing exercise of mine.

VBCurtis 2021-12-19 00:35

[QUOTE=jwaltos;595608]Caustic wit with good content. A Scoville rating of 850k.
Re-reading certain threads (historical records a la Galaxy Quest) and posts, and comparing them with the content of the newer ones, gives a good recipe... sketch... outline... of a methodology to implement, and then some. Hardware isn't everything, though, and solid theory* is prevalent/intrinsic within those posts.
*As an afterthought: solidity based on mathematical proof rather than something like phlogiston. Dephlogistication is an ongoing exercise of mine.[/QUOTE]

What are you trying to say? This post seems devoid of content. What methodology are you referring to, that you have in mind to implement?

jwaltos 2021-12-19 02:52

[QUOTE=VBCurtis;595615]What are you trying to say? This post seems devoid of content. What methodology are you referring to, that you have in mind to implement?[/QUOTE]

"Caustic wit with good content. A Scoville rating of 850k.

Re-reading certain threads (historical records a la Galaxy Quest) and posts and comparing them with the content of the newer ones gives a good recipe..sketch..outline..of a methodology to implement and then some. Hardware isn't everything though and solid theory* is prevalent/intrinsic within those posts.
As an afterthought, solidity based on mathematical proof rather than something like phlogiston. Dephlogistication is an ongoing exercise of mine."

Perhaps it may seem devoid of content, but that depends upon perspective. I'll explain myself to you:
Certain people within this forum are very good at expressing dry wit, implicit and explicit, and sometimes at a level of abstraction greater than the originating post... i.e. data abstraction. I appreciate a good sense of humour as well as a good burn, hence my reference to Scoville. The Galaxy Quest reference was stated because it's a great parody rooted in interpretation.

On a serious note, you've been around this forum for a while, and you've seen posts by frmky, Gerbicz, xilman, Batalov, Greathouse, ewmayer, Sardonicus, LaurV. These are some names off the top of my head whose posts regarding hardware, firmware, and software implementation, development, and execution have helped me through the years. Ancillary subjects such as chess and astronomy have also interested me. Rogue's listing of science news and Kriesel's reporting on GPU performance, plus the excellent software from Penne, Buhrow, and JasonP, have also made this forum a "must check". Many aspects of "number crunching" have been discussed, from back-dating hardware (virus proofing) and back-dating operating systems (some awesome software just won't work on a newer Linux release) to CUDA and OpenCL code discussions, which have helped me out here and there, as well as advanced cooling mechanisms.
Nick's thread is always worthwhile, and Sardonicus has helped me find a paper or two I couldn't locate online.

All of the above have helped me develop a methodology for approaching a question: understanding what that question is and appreciating what it is that I'm looking at. This forum helps people like myself do that - develop an understanding and an appreciation of the "state" of a question and of my (or your, for that matter) ability to resolve it. Phlogiston, N-rays, cold fusion (there are books on these farces/delusions), as well as books on mathematical cranks and on those with psychological issues.

The methodology that I have in mind is that you must be capable of proving whatever it is you state as fact, or at the very least making a concise empirical statement. This all begins with awareness. This means learning everything you can about what the question requires you to know. This is a stepping stone to asking questions that may not have been asked. This is difficult and requires a good "architect-like" imagination. I've worked on certain questions for decades... perhaps the mathematical concepts needed to solve such conundrums exist, perhaps not, but I'll be damned if I'll wait patiently for something to be published while I'm still capable of thinking along those same lines.

It's like a game of poker when the stakes become seriously "un-fun": unless you're able to play the game as it should be played, and then able to take care of yourself afterwards, you shouldn't be in the game to begin with. By this I mean that a reputation is built on professionalism, which is based on a track record. My track record isn't here. I hope I was able to convey that my methodology is to always be actively prepared and proactive, and to address any shortcomings immediately and thoroughly. Try skydiving, where you need to cut away to your reserve chute, or experience a regulator failure at depth while scuba diving. Things happen... Scouts' motto: be prepared.

I echo the sentiment below.

Zhangrc 2021-12-19 02:53

Enough, enough.
Remember [URL=https://www.mersenneforum.org/showpost.php?p=585535&postcount=31]this one[/URL].
What we should do now is actually find a factor, to prove that the project is feasible!

lycorn 2021-12-20 20:41

[QUOTE=lisanderke;595603]You could have simply said that and suggested another name, if that is what you're bothered with.[/QUOTE]

Seconded.

Prime95 2021-12-20 20:53

My first find for this project!

{"status":"F", "exponent":79259, "worktype":"P-1", "factors":["5397816232739964621072948142158502627401583"], "b1":1000000000, "b2":1000000000, "fft-length":4096}

