
mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Lone Mersenne Hunters (https://www.mersenneforum.org/forumdisplay.php?f=12)
-   -   Coordination thread for redoing P-1 factoring (https://www.mersenneforum.org/showthread.php?t=23152)

ixfd64 2018-03-12 04:03

Coordination thread for redoing P-1 factoring
 
Some of us have been redoing P-1 on exponents with only stage 1 done. But because [url=http://mersenneforum.org/showthread.php?t=23110]it's not (always) possible[/url] to register such assignments, this creates the risk of stepping on toes. Therefore, I decided to create this thread to coordinate such factoring efforts. Feel free to share which ranges you're working on and any interesting factors you find. :smile:

I'll start: I have three machines that are redoing P-1 factoring:[LIST][*]A dual-core laptop working on exponents around 43.9m and 47.9m[*]A quad-core MacBook Pro working on exponents around 43.6m[*]A quad-core desktop working on exponents around 47.6m and 77.2m[/LIST]
All three computers are alternating between normal P-1 factoring and rerunning P-1 on exponents without stage 2 done.

petrw1 2018-03-12 17:31

Repost from the prior thread
 
For me:
- 50-59M range
- For any .1M range that has more than 1999 unfactored
- For exponents whose current P-1 has B1=B2 (and bounds that are not excessively large)
I am running PMinus1 with B1=1000000 B2=20000000 (1M,20M)

I expect to be at this all of 2018 but if anything changes....I'll post

chalsall 2018-03-12 17:53

I currently have four of my machines working in the 40M to 49M ranges (inclusive). They focus on a particular 0.1M range at a time. Currently they're working 45.7M, and will then focus on 44.2M (in about a week). I try to reserve them from Primenet so people see this activity (not always possible, since some of them have already had a DC).

For anyone who is interested, I'm letting mprime decide the bounds, based on four LL assignments being saved (doesn't make sense, I know, but it's my kit).

For 45.7M I've so far run 198 tests, and found 7 factors.

VictordeHolland 2018-03-12 18:04

Most of my unreserved P-1 effort was in the range 1.5M - 1.7M (B1=10e6 B2=200e6), which I'm currently running ECM on (B1=50,000).

Also I know Jocelyn Larouche is doing P-1 in the region below 4M.

ric 2018-03-12 23:23

I've two older machines slooowly doing P-1 on expos having B1<150k & B2<1M[LIST][*]in the range 12.2M to 12.4M[*]in the range 15M to 15.3M[/LIST]
just for fun...

ET_ 2018-03-13 11:03

I am doing P-1 testing from time to time, taking exponents that have had poor or no stage 2, preferring smaller ones.

ixfd64 2018-03-22 21:01

Update: I'm done with the 43.6m and 77.2m ranges for the time being. The MacBook Pro is now redoing exponents in the 44.1m range.

Chris: I see that you've reserved a few exponents in the 44.1m range as well. Are you planning to do more?

petrw1 2018-03-23 02:27

Opinions or observations please...
 
As you may have noticed I am on a P-1 binge recently.
I have full time P-1 running on:
2 x 2-core PC's
6 x 4-core PC's

Some have as much as 16GB of RAM; a couple only 4G.
I have noticed over the past years of doing P-1 that more RAM makes a big difference in Phase 2.

Simple observations have shown that running 480 relative primes in batches of 10 takes noticeably longer than the same run in batches of 120, for example.

(I wouldn't be surprised if the following has been noted before and I missed it)...

So that got me thinking that, especially for the PCs with 4GB or 8GB of RAM, a machine should complete more total tests per week running 2 workers of 2 cores each rather than 4 workers of 1 core each.
Phase 1 may be slightly slower, but Phase 2 should be enough faster to more than make up for it: with only 2 workers fighting for RAM, each gets a lot more and can process more relative primes.

Opinions?

tha 2018-04-04 09:39

I work in the 10M to 25M range. Currently in the 11M range and the 22M range.

ric 2018-04-05 13:12

[QUOTE=ric;482194][LIST][*]in the range 12.2M to 12.4M[/LIST][/QUOTE]

324 cands, expected prob ~2.5%, 3 new factors (0.9%).
Meh!

Right now working on 12.0M to 12.2M, then will extend from 12.4M to 13M

chalsall 2018-04-05 14:30

[QUOTE=ric;484376]324 cands, expected prob ~2.5%, 3 new factors (0.9%). Meh![/QUOTE]

Keep in mind that when you're redoing P-1 work you should expect a lower probability than what Prime95/mprime reports. As a rough guide, I subtract the previous run's probability from what is reported as expected.

When I'm redoing poorly P-1'ed work (read: no Stage 2) I tell mprime that the test will save four LL tests, and give it between 10GB and 12GB of RAM to use. Doesn't make sense, I know, but it's my kit and electrons. On average in the 4xM range I get about a 3% success rate.

ric 2018-04-05 15:53

[QUOTE=chalsall;484387]Keep in mind that when you're redoing P-1 work you should expect a lower probability than what Prime95/mprime reports. As a rough guide, I subtract the previous run's probability from what is reported as expected.
[/QUOTE]

Sure thing!

No need to split hairs, since this is a filler job for me, but - using mersenne.ca's p-1 prob estimator - those cands had former B1/B2's corresponding to a prob level around 1.5%, and I brought them to around 4% - hence my disappointment.

As a more general point, I've been playing this "filler game" for quite some time now, and I've likewise seen an average success rate of 3-4%. For sure, when TF levels were lower, 'twas much more rewarding :)

chalsall 2018-04-05 16:10

[QUOTE=ric;484392]No need to split hairs, since this is a filler job for me, but - using mersenne.ca's p-1 prob estimator - those cands had former B1/B2's corresponding to a prob level around 1.5%, and I brought them to around 4% - hence my disappointment.[/QUOTE]

Ah... I now better understand your statement.

Statistics has no memory. Run an infinite number of tests and you should see about a 2.5% success rate... :smile:

petrw1 2018-04-15 04:16

[QUOTE=petrw1;483122]As you may have noticed I am on a P-1 binge recently.
I have full time P-1 running on:
2 x 2-core PC's
6 x 4-core PC's

Some have as much as 16GB of RAM; a couple only 4G.
I have noticed over the past years of doing P-1 that more RAM makes a big difference in Phase 2.

Simple observations have shown that running 480 relative primes in batches of 10 takes noticeably longer than the same run in batches of 120, for example.

(I wouldn't be surprised if the following has been noted before and I missed it)...

So that got me thinking that, especially for the PCs with 4GB or 8GB of RAM, a machine should complete more total tests per week running 2 workers of 2 cores each rather than 4 workers of 1 core each.
Phase 1 may be slightly slower, but Phase 2 should be enough faster to more than make up for it: with only 2 workers fighting for RAM, each gets a lot more and can process more relative primes.

Opinions?[/QUOTE]

So after a month of trying P-1 with 2 workers of 2 cores each on a PC with 4.5GB available to Prime95, I measured that the actual throughput dropped by about 5%... I'm surprised, but facts don't lie.

But then I got to thinking that when I have 2 workers instead of 4, not only do they each get twice the RAM for Stage 2, but (and I know almost nothing about this) I believe that with enough extra RAM for Stage 2, Brent-Suyama kicks in and E=3, E=6 or E=12 increases the odds of finding a factor.

Am I anywhere close on this... and can someone tell me how much of an increase E=3, E=6 or E=12 gives me? Maybe enough that I can swallow the 5% throughput loss.

Thanks

petrw1 2018-06-08 20:52

[QUOTE=chalsall;484387]Keep in mind that when you're redoing P-1 work you should expect a lower probability than what Prime95/mprime reports. As a rough guide, I subtract the previous run's probability from what is reported as expected.

When I'm redoing poorly P-1'ed work (read: no Stage 2) I tell mprime that the test will save four LL tests, and give it between 10GB and 12GB of RAM to use. Doesn't make sense, I know, but it's my kit and electrons. On average in the 4xM range I get about a 3% success rate.[/QUOTE]

I have 28 cores doing full time P-1; all are re-do's in the 5xM range where the prior P-1 was Stage 1 only (B1=B2).

Subtracting the expected factor ratio of the prior run from my run comes out close to 3%.

After more than 5,100 such tests, my overall success rate is currently 2.98%.

At the extremes, on the low end I once saw consecutive factors, and six times I got 2 factors out of 3 attempts;
on the high end, I had stretches of 169 and 183 attempts between factors.

My per-PC success rates range from 2.12% to 3.39% among PCs that have done more than 600.

petrw1 2018-11-08 01:56

I am now "reserving" P-1 work in the following ranges:

48.6
48.7
46.7
48.3
47.2

ixfd64 2019-06-16 23:47

I'm redoing P-1 on exponents in the [20m, 20.05m] range where B2 < 2,500,000.

petrw1 2019-06-17 03:29

Continuing P1 where B1=B2 in all 4xM ranges that have more than 2000 unfactored

ixfd64 2019-06-24 16:44

I have another PC running P-1 on exponents in the [57m, 57.1m] range.

masser 2019-09-26 23:37

I'm running P-1 with larger bounds on exponents in the 2.6M and 2.8M ranges.

ixfd64 2019-10-02 22:04

I'm currently redoing P-1 on exponents from 14.4m to 14.5m with B2 < 2,000,000.

masser 2019-11-13 15:10

[QUOTE=masser;526678]I'm running P-1 with larger bounds on exponents in the 2.6M and 2.8M ranges.[/QUOTE]

I will pause on the 2.8M range and resume on the 2.6M range. I'm losing a few credits here and there because a few people (probably not on the forum) are also taking exponents in these ranges for P-1 with large bounds.

After I complete my first pass over 2.6M, I plan to do a second pass over 2.6M and 2.8M. Anyone have a suggestion for how much to increase the bounds for a second pass of P-1 factoring? I was thinking that I would use the probability calculator at mersenne.ca; currently I'm putting in about 0.5 GhzD/exponent. For the second pass, I thought I would increase that to 0.75 GhzD/exponent and use bounds that approx. maximize the probability of finding a factor.

petrw1 2019-11-13 15:53

[QUOTE=masser;530466]I will pause on the 2.8M range and resume on the 2.6M range. I'm losing a few credits here and there because a few people (probably not on the forum) are also taking exponents in these ranges for P-1 with large bounds.

After I complete my first pass over 2.6M, I plan to do a second pass over 2.6M and 2.8M. Anyone have a suggestion for how much to increase the bounds for a second pass of P-1 factoring? I was thinking that I would use the probability calculator at mersenne.ca; currently I'm putting in about 0.5 GhzD/exponent. For the second pass, I thought I would increase that to 0.75 GhzD/exponent and use bounds that approx. maximize the probability of finding a factor.[/QUOTE]

I rely a lot on the probability calculator.
If you are a programmer you should consider getting the source and coding it yourself so it can process large batches of potential P-1 assignments and calculate the expected success rate and the GhzDays effort.

If you choose a few different percentages vs effort and graph them you will note that at some point it takes a LOT more work for a small percentage improvement.
I try to find good balance between expected factors found vs. GhzDays effort to find them.

In my experience multiple passes with increasing bounds is inefficient.
If you only use B1 then each successive Stage 1 run will continue where it left off (assuming you save the work files).
However, any time you increase B2 it runs the entire Stage 2 from the start whether or not you change B1.

I prefer to calculate the bounds that get the number of factors I want and use those bounds in one pass, starting with the exponents with the best odds.
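
For anyone who takes up Wayne's suggestion to code the estimate themselves: below is a minimal sketch of the stage 1 piece only, in Python. It is [B]not[/B] the mersenne.ca formula; it assumes the usual ~1/b heuristic for a factor turning up in [2^b, 2^(b+1)], approximates smoothness with Dickman's rho function, and ignores stage 2, Brent-Suyama, and any ECM already done, so it should come out below what the calculator reports once a real B2 is included. The example inputs (exponent 2693501, TF to 68 bits, B1=2M) are the ones masser quotes a few posts below.

[CODE]# Rough sketch of a stage 1 P-1 success estimate -- NOT the mersenne.ca formula.
# Assumptions: a factor f = 2*k*p + 1 lies in [2^b, 2^(b+1)] with probability
# about 1/b, and stage 1 finds it when k is B1-smooth, with the smoothness
# probability approximated by Dickman's rho.  Stage 2 and prior work are ignored.
import math

def dickman_rho(u, h=0.001):
    """Approximate rho(u) by stepping the relation rho'(t) = -rho(t-1)/t."""
    if u <= 1:
        return 1.0
    n = int(math.ceil(u / h))
    one = int(round(1 / h))                # grid points per unit interval
    rho = [1.0] * (n + 1)                  # rho(t) = 1 for 0 <= t <= 1
    for i in range(one + 1, n + 1):
        t = i * h
        slope_prev = -rho[i - one - 1] / (t - h)
        slope_curr = -rho[i - one] / t
        rho[i] = rho[i - 1] + 0.5 * h * (slope_prev + slope_curr)
    return max(rho[n], 0.0)

def p1_stage1_prob(p, B1, tf_bits, max_bits=90):
    """Sum over factor sizes above the TF limit of
       P(a b-bit factor exists) * P(its k = (f-1)/(2p) part is B1-smooth)."""
    total = 0.0
    for b in range(tf_bits + 1, max_bits + 1):
        k_bits = b - math.log2(2 * p)      # approximate size of k in bits
        u = k_bits * math.log(2) / math.log(B1)
        total += (1.0 / b) * dickman_rho(u)
    return total

# Example inputs taken from the M2693501 discussion below.
print(p1_stage1_prob(p=2693501, B1=2_000_000, tf_bits=68))
[/CODE]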

masser 2019-11-13 17:55

Thanks, Wayne. That confirms what I was beginning to suspect about P-1 strategy. I think 2.6M will be very close to the 1999 unfactored goal after my initial pass is complete. If necessary, finishing that range with a second pass shouldn't be too difficult. I should be finished with 2.6M by the end of 2019.

2.8M will take a little longer with my measly i5, but I'll keep plugging away at it. The next pass over that range will be a little more strategic.

petrw1 2019-11-13 19:14

[QUOTE=masser;530494]Thanks, Wayne. That confirms what I was beginning to suspect about P-1 strategy. I think 2.6M will be very close to the 1999 unfactored goal after my initial pass is complete. If necessary, finishing that range with a second pass shouldn't be too difficult. I should be finished with 2.6M by the end of 2019.

2.8M will take a little longer with my measly i5, but I'll keep plugging away at it. The next pass over that range will be a little more strategic.[/QUOTE]

Cool!

:thumbs-up:

R.D. Silverman 2019-11-13 22:58

[QUOTE=masser;526678]I'm running P-1 with larger bounds on exponents in the 2.6M and 2.8M ranges.[/QUOTE]

It is, almost certainly, [b]not[/b] worth the effort. Compute, e.g. the conditional
probability of finding a factor given that the method failed with a prior given B1.

Extending P-1 just isn't worth the effort. It would be much better to run ECM
instead.

masser 2019-11-14 17:39

Dr. Silverman, thank you for the recommendation. I considered running ECM curves.

Here is an exponent I considered: [URL="https://www.mersenne.ca/exponent/2693501"]2693501[/URL]

P-1 has already been performed on this exponent with bounds, B1= 670000 and B2 = 6700000. The probability of finding a factor with those bounds was 0.03856. If I complete another P-1 calculation, with bounds, B1=2000000, B2=50000000, the probability of finding a factor (given current trial factoring limit, but assuming no prior P-1) will be 0.06294. I used the probability calculator [URL="https://www.mersenne.ca/prob.php?exponent=2693501&b1=2000000&b2=50000000&guess_saved_tests=&factorbits=68&K=1&C=-1"]here[/URL]. I estimate the conditional probability using subtraction (how bad is this?): 0.06294-0.03856 = 0.02438. The P-1 calculation will take 21 minutes on my machine and I expect to find a factor for every 41 (1/0.02438) exponents that I test with similar bounds. So, I expect to find 1 factor every 14.35 hours via P-1.

Each ECM curve for M2693501, at B1=50000 and B2=5000000, takes about 5 minutes on my machine. Completing 214 such curves will take about 18 hours. Would completing the 214 ECM curves at B1=50000 be comparable to trial factoring to 83 bits? That's how I came up with an estimate for the probability of finding a factor via ECM: 0.1139. With these crude estimates of the probability, I anticipate finding a factor via ECM in this range about once a week.

Experience has not borne out the P-1 estimate above of one factor every 14 hours; it has been more like one factor every day. I note that there has been some ECM already performed on these exponents and I'm using very crude estimates. Please correct me if I'm doing anything wildly incorrect. However, at this point it appears that additional P-1 in the 2.6M range is more efficient at finding factors than ECM.

henryzz 2019-11-15 10:54

I believe that the conditional probability for the P-1 rerun should be (P(second) - P(first))/(1-P(first)). This assumes that the search space for the first run is completely contained within the second run (and is also quite possibly messed up by TF considerations). As such, the search space is smaller for the second run. This provides a value of 0.03968 for your example 2693501.
These numbers do not take into account any ECM that has already been done. I am not sure what the average amount of ECM done so far on numbers this size is, but 2693501 has 33 curves done. This is 33/280 of t25 and as such, there is a 1-e^(-33/280)=11.1% chance of any 25-digit factor having been found already by ECM. This will be higher for smaller factors, of course.

Please feel free to correct my hastily done maths on this. It would be interesting to make a calculator that took all known work into account as well as the expected size of the factors and showed the probability of any new work finding a factor of x digits graphically. I might look into doing this if I can work out the formulas for P-1 and ECM probabilities.
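
A short sketch of both estimates in Python. The two P-1 probabilities plugged in are the mersenne.ca figures masser quoted (0.03856 and 0.06294); the 0.03968 above presumably came from slightly different calculator inputs, so the conditional value printed here is not expected to match it exactly. The 280-curves-per-t25 figure is taken from this post.

[CODE]import math

def conditional_p1_prob(p_new, p_old):
    """Chance the larger-bound P-1 run finds a factor, given that the old run
    (whose search space is contained in the new one) found nothing."""
    return (p_new - p_old) / (1.0 - p_old)

def prior_ecm_coverage(curves_done, curves_per_t):
    """Chance the ECM already done would have found a factor at that digit
    level, using the 1 - e^(-curves/expected_curves) approximation."""
    return 1.0 - math.exp(-curves_done / curves_per_t)

# mersenne.ca figures for M2693501: prior B1=670k/B2=6.7M vs. proposed B1=2M/B2=50M
print(conditional_p1_prob(p_new=0.06294, p_old=0.03856))
# 33 curves already done, with ~280 curves making up a full t25
print(prior_ecm_coverage(curves_done=33, curves_per_t=280))   # about 0.111
[/CODE]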

R.D. Silverman 2019-11-15 18:04

[QUOTE=masser;530591]Dr. Silverman, thank you for the recommendation. I considered running ECM curves.

Here is an exponent I considered: [URL="https://www.mersenne.ca/exponent/2693501"]2693501[/URL]

P-1 has already been performed on this exponent with bounds, B1= 670000 and B2 = 6700000. The probability of finding a factor with those bounds was 0.03856. If I complete another P-1 calculation, with bounds, B1=2000000, B2=50000000, the probability of finding a factor (given current trial factoring limit, but assuming no prior P-1) will be 0.06294. I used the probability calculator [URL="https://www.mersenne.ca/prob.php?exponent=2693501&b1=2000000&b2=50000000&guess_saved_tests=&factorbits=68&K=1&C=-1"]here[/URL]. I estimate the conditional probability using subtraction (how bad is this?): 0.06294-0.03856 = 0.02438. The P-1 calculation will take 21 minutes on my machine and I expect to find a factor for every 41 (1/0.02438) exponents that I test with similar bounds. So, I expect to find 1 factor every 14.35 hours via P-1.

Each ECM curve for M2693501, at B1=50000 and B2=5000000, takes about 5 minutes on my machine. Completing 214 such curves will take about 18 hours. Would completing the 214 ECM curves at B1=50000 be comparable to trial factoring to 83 bits? .[/QUOTE]


You are comparing the wrong things.

I will accept your probabilities as correct. [Your conditional probability computation
is not correct, however.] You need to divide by (1 - P1).

Running P-1 with limits B1, B2 is the same as running a single elliptic curve.
[although the computations are simpler and hence faster] (one does need to adjust
the size of the candidate by log(exponent) because P-1 is always divisible by
the exponent.)

Running a second elliptic curve with the same limits will double the probability of
success. Increasing B1, B2 for P-1 does not double the probability of success
unless B1, B2 are [b]greatly[/b] increased.

You are comparing running P-1 with B1, B2 against running another ECM
curve with much SMALLER limits. And running ECM gives multiple independent
chances. P-1 does not.

Read my joint paper with Sam Wagstaff: A Practical Analysis of ECM.

VBCurtis 2019-11-15 18:20

[QUOTE=R.D. Silverman;530678]You are comparing the wrong things.

You are comparing running P-1 with B1, B2 against running another ECM
curve with much SMALLER limits. And running ECM gives multiple independent
chances. P-1 does not.[/QUOTE]

Could you elaborate, specifically about which bounds for ECM will yield a better expected time per found factor than his P-1 work? The quoted part above seems to claim his choice of ECM bounds is too small to be a fair comparison, but your case to him is that he is wasting time doing P-1 when he should be doing ECM instead. What size ECM should he do that is more efficient than P-1?

Masser is comparing the rate of found factors by P-1 (at large bounds as stated) to the rate by ECM (at small bounds). If that's the wrong thing to compare, what is the right thing?

R.D. Silverman 2019-11-15 23:38

[QUOTE=VBCurtis;530683]Could you elaborate, specifically about which bounds for ECM will yield a better expected time per found factor than his P-1 work? The quoted part above seems to claim his choice of ECM bounds is too small to be a fair comparison, but your case to him is that he is wasting time doing P-1 when he should be doing ECM instead. What size ECM should he do that is more efficient than P-1?

Masser is comparing the rate of found factors by P-1 (at large bounds as stated) to the rate by ECM (at small bounds). If that's the wrong thing to compare, what is the right thing?[/QUOTE]

The data is all in my paper. Consider the following: P-1 to limits B1, B2 has been run.

Suppose we have the choice of (say) running P-1 again with (say) limits kB1, kB2
or running an elliptic curve with limits B1, B2. [for some k].

The latter choice will double the probability of success. The former will not. The former
will take [b]less time[/b], but the latter will give greater likelihood of success. The
former will allow more numbers to be tested in a fixed amount of time.

If we choose k so that we spend the same amount of time extending P-1 as running ECM,
the latter should be more efficient.

OTOH, if one places a BOUND on the time to be spent, i.e. one wants to spend
time T either running more curves or extending B1,B2 for P-1, it may be more
effective to extend P-1 because (as already noted) one must lower B1,B2 for the
elliptic curves, since point addition on curves is much more expensive than simple
exponentiation.

I have not computed the effect that attempting 2^p - 1 has on P-1. The form of the
numbers gives that P-1 is always divisible by p. This effectively reduces the size
of the numbers that "must be smooth" by log(p). This may give a sufficient advantage
to make P-1 more effective when bounding the run time. Note that this advantage
disappears if one were attempting to factor numbers of no particular form.

If one is willing to spend an arbitrary amount of time on each candidate it is clear
that running multiple elliptic curves will be far more effective than raising the P-1
bounds, especially as the factors get larger.

masser 2019-11-16 18:22

Thank you all for the feedback. I have downloaded the Silverman-Wagstaff paper and will read it, but I will probably need to brush up on some of the background before I fully comprehend it. Thanks again.

masser 2019-11-28 06:16

New (to me) result feedback:

Splitting composite factor 72115741492408141371057158919540730748664584042639 into:
* 57330969015562354090032601
* 1257884573219624043651239

[URL="https://www.mersenne.org/report_exponent/?exp_lo=2645329&full=1"]https://www.mersenne.org/report_exponent/?exp_lo=2645329&full=1[/URL]

:huh:

c10ck3r 2019-11-28 09:54

[QUOTE=masser;531624]New (to me) result feedback:

Splitting composite factor 72115741492408141371057158919540730748664584042639 into:
* 57330969015562354090032601
* 1257884573219624043651239

[URL]https://www.mersenne.org/report_exponent/?exp_lo=2645329&full=1[/URL]

:huh:[/QUOTE]
And a juicy Brent-Suyama (B-S) factor at that. The smaller of the two would otherwise require B2=809M instead of the 50M it received. In fact, if you were running with that high of a B2, you likely would've only found the larger factor, as it would be found in stage 1 with a B1 of 11M. Very cool how that worked out!
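
For anyone who wants to check this sort of claim: a sketch (Python with sympy) using the two prime factors of M2645329 reported above. Write each factor as f = 2kp + 1 and factor k; classical P-1 finds f when every prime factor of k is at most B1, except possibly the largest, which must be at most B2. The Brent-Suyama extension, which is what actually caught the smaller factor here according to the post above, falls outside this simple bookkeeping.

[CODE]# For each reported prime factor f of M2645329, write f - 1 = 2*k*p and factor k.
# The largest prime factor of k is the B2 a classical stage 2 would need;
# the next largest is (roughly) the required B1.  Brent-Suyama is ignored here.
from sympy import factorint

p = 2645329
reported_factors = [57330969015562354090032601, 1257884573219624043651239]

for f in reported_factors:
    k, rem = divmod(f - 1, 2 * p)          # factors of M_p have the form 2*k*p + 1
    assert rem == 0
    primes = sorted(factorint(k))          # distinct prime factors of k, ascending
    print("f =", f)
    print("  largest prime of k (needed B2):       ", primes[-1])
    print("  next largest prime of k (roughly B1): ", primes[-2] if len(primes) > 1 else 1)
[/CODE]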

masser 2019-12-31 18:09

[QUOTE=masser;530494] I think 2.6M will be very close to the 1999 unfactored goal after my initial pass is complete. If necessary, finishing that range with a second pass shouldn't be too difficult. I should be finished with 2.6M by the end of 2019.

2.8M will take a little longer with my measly i5, but I'll keep plugging away at it. The next pass over that range will be a little more strategic.[/QUOTE]

SRJ2877 and I have gotten the 2.6M range below 2000 unfactored. I will now "unreserve" the 2.6M range. The last few factors were found by running ECM curves. I ran 8396 curves at the 25-digit bounds (B1=50K, B2=5M) and found 7 factors. This took 23.25 days, so about two factors per week for me.

I will now work on a second pass of P-1 factoring over the 2.8M range. After digging around the forum a little bit more, I found [URL="https://mersenneforum.org/showpost.php?p=403960&postcount=111"]this post[/URL] that recommended spending 5-6% of the ECM effort on P-1 factoring. Getting the 2.8M range below 2000 unfactored will likely require additional ECM work after this pass of P-1 factoring.

masser 2020-03-16 23:28

[QUOTE=masser;533846]SRJ2877 and I have gotten the 2.6M range below 2000 unfactored. I will now "unreserve" the 2.6M range.

I will now work on a second pass of P-1 factoring over the 2.8M range. [/QUOTE]

The 2.8M range now has less than 2000 unfactored. I will "unreserve" and move on to the 2.9M range.

masser 2020-04-04 21:25

2.9M now has less than 2000 unfactored. Pursuing the white whale of 14.0M next...

petrw1 2020-04-04 22:10

[QUOTE=masser;541800]2.9M now has less than 2000 unfactored. Pursuing the white whale of 14.0M next...[/QUOTE]

That's ambitious. Good for you.
I probably don't have to tell you that you will need aggressive B1/B2 values.

Enjoy.

masser 2020-05-23 14:31

I have the stage 1 P-1 savefiles for the factoring work I completed in the 2.6M, 2.8M and 2.9M ranges.

If someone were interested in further factoring the candidates in those ranges, the savefiles might save them quite a bit of work.

I'm certain this has been discussed on the forum in the past, but currently, there is no online repository for savefiles, right?

My machine is (slowly) running out of storage space, so I might have to begin deleting the savefiles. It would be nice to have a place to stow them for future interested parties.

petrw1 2020-05-23 15:19

[QUOTE=masser;546296]I have the stage 1 P-1 savefiles for the factoring work I completed in the 2.6M, 2.8M and 2.9M ranges.

If someone were interested in further factoring the candidates in those ranges, the savefiles might save them quite a bit of work.

I'm certain this has been discussed on the forum in the past, but currently, there is no online repository for savefiles, right?

My machine is (slowly) running out of storage space, so I might have to begin deleting the savefiles. It would be nice to have a place to stow them for future interested parties.[/QUOTE]

I feel your pain; I've got close to 30,000 P-1 save files from the last almost 3 years here in the 40M and 50M ranges. I've been saving them for the same reason but because of the aggressive bounds I used I think it is unlikely there would be value in furthering them.

I don't have the hard math but I think it would be more efficient to take an exponent with lower/mediocre bounds with no save file as a starting point than my aggressively P-1'd exponents with the save file.

masser 2020-05-23 16:57

[QUOTE=petrw1;546298]
I don't have the hard math but I think it would be more efficient to take an exponent with lower/mediocre bounds with no save file as a starting point than my aggressively P-1'd exponents with the save file.[/QUOTE]

Yes, that is true. We could make the case that when the really deep factoring efforts (like those for exponents less than 1M) reach the 40M range, technology will have advanced so far that our current efforts (and savefiles) will be trivial to reproduce.

The savefiles in the 2.6M range are a lot closer to some of the thorough factoring efforts. I don't know enough about those efforts to understand if the savefiles have any value to anyone.

:confused2:

kruoli 2020-05-27 11:14

[QUOTE=masser;546296]My machine is (slowly) running out of storage space, so I might have to begin deleting the savefiles. It would be nice to have a place to stow them for future interested parties.[/QUOTE]

I have spare storage - before deleting yours, would you mind sending them to me? I could figure out a way to make them publicly available. How much data are we talking about?

petrw1 2020-05-27 15:47

[QUOTE=kruoli;546575]I have spare storage - before deleting yours, would you mind sending them to me? I could figure out a way to make them publicly available. How much data are we talking about?[/QUOTE]

Do you want mine too? I have about 50,000 files of about 5K each.

kruoli 2020-05-27 16:20

Sure, thanks! Maybe this way we can start a central system.

tha 2020-06-01 20:03

I am doing some P-1 in the 15M range.

petrw1 2020-06-04 15:27

[QUOTE=kruoli;546606]Sure, thanks! Maybe this way we can start a central system.[/QUOTE]

What's the best way to send them your way?

kruoli 2020-06-04 15:38

I'd vote for SFTP, I already have that set up. If you are okay with this, I'll DM you some credentials.

petrw1 2020-06-04 16:32

[QUOTE=kruoli;547165]I'd vote for SFTP, I already have that set up. If you are okay with this, I'll DM you some credentials.[/QUOTE]

Ok...it may take a few days. I have several computers to collect from.

SethTro 2020-06-05 22:23

Semi-related: I posted a [URL="https://www.mersenneforum.org/showthread.php?p=540022#post540022"]patch[/URL] to mprime that allows you to easily view the important information (B1, B2, percent complete...) from each savefile.

masser 2020-06-10 17:24

[QUOTE=petrw1;547167]Ok...it may take a few days. I have several computers to collect from.[/QUOTE]

Which files will you send? Not the *.bu or *.bu2 files, right?

petrw1 2020-06-10 21:01

[QUOTE=masser;547626]Which files will you send? Not the *.bu or *.bu2 files, right?[/QUOTE]

Correct, those are just duplicate/backup save files

Just mXX999999

Ensigm 2020-10-30 16:48

Reservation, reservation…
 
"Reserving" all exponents in
- 54.30 M, 54.31 M and 54.32 M
that have B1=B2≤730,000 [B]and[/B] no DC
for 30 days.

This shouldn't (and wouldn't) prevent them from being assigned and/or done as DC, since they're not registrable. The "reservations" only apply to factoring; please kindly find another range, my fellow P-1 hunters.

S485122 2020-10-30 18:52

[QUOTE=Ensigm;561565]"Reserving" all exponents in
- 54.30 M, 54.31 M and 54.32 M
that have B1=B2≤730,000 [B]and[/B] no DC
for 30 days.

This shouldn't (and wouldn't) prevent them being assigned and/or done as DC, since they're not registrable. The "reservations" only apply to factoring——please kindly find another range, my fellow P-1 hunters.[/QUOTE]But if they are assigned for a double-check, that user would lose his work if a factor was found... Please respect the reservation system.

Ensigm 2020-10-30 19:39

[QUOTE=S485122;561577]But if they are assigned for a double-check, that user would lose his work if a factor was found... Please respect the reservation system.[/QUOTE]
No, they would not. If they have already started the double-check, they wouldn't lose it; it just wouldn't be as useful (to GIMPS). I'm not sure how mprime behaves in the case where they haven't started the DC, but it will not unreserve any work that has been started.

Ensigm 2020-11-01 17:55

Since the previous range had fewer exponents without a DC than expected, I'm now "reserving" for 30 days all exponents in
- 54.33 M, 54.34 M and 54.35 M
that have B1=B2≤730,000 [B]and[/B] no DC.

These "reservations" only apply to factoring.

Ensigm 2020-11-04 10:42

Cat 1 exponents with only Stage 1 P-1 are surprisingly rare. I'm now "reserving" for 30 days all the exponents in
- 54.36 M, 54.37 M, 54.38 M and 54.39 M
that have B1=B2≤730,000 [B]and[/B] no DC.

These "reservations" only apply to factoring.

S485122 2020-11-04 18:41

[QUOTE=Ensigm;561839]...
These "reservations" only apply to factoring.[/QUOTE]There is a reservation system ALSO for factoring. You choose to ignore it.
It seems to me you easily dismiss the work of others: [QUOTE=Ensigm;561839]they wouldn't lose it, just that it wouldn't be so useful (to GIMPS)[/QUOTE]People put work into finding factors or doing LL or PRP tests. When one returns an LL or PRP result for a number that is already factored, one doesn't receive credit.

Please respect AND USE the reservation system: it is available for factoring work. Just look at the different columns in the [url=https://www.mersenne.org/primenet/]Work Distribution Map[/url].

Jacob

Ensigm 2020-11-05 21:56

[QUOTE=S485122;562204]There is a reservation system ALSO for factoring.[/QUOTE]
I admit that if I find a factor a few bits above the current TF limit, there is a small possibility it will clash with someone else's TF assignment. But practically speaking, TF assignments in this range are [URL="https://www.mersenne.org/assignments/?exp_lo=54300000&exp_hi=55000000&execm=1&exdchk=1&exfirst=1&exp1=1"]extremely rare[/URL] (see reason below too). Plus, I only work on exponents that are not assigned at the moment I check, and turn them in rapidly. As far as I know, my work has never actually resulted in poaching anyone else's factor (nor have I even seen a single exponent get assigned for TF before I turned in my work).

It's actually quite a dilemma here: I can, for sure, manually reserve these as TF, then do the P-1, and manually unreserve them. This would eliminate the possibility of accidentally poaching anyone else's factor. But on the other hand, this (or reserving these exponents as TF in general) would prevent DC being done on them, and the general belief is that it should be discouraged, according to discussions at [URL="https://www.mersenneforum.org/showthread.php?t=26001"]this thread[/URL]. To conclude, there's no perfect way to do it, but at least I'm comfortable with what I am doing.

Uncwilly 2020-11-05 22:31

[QUOTE=Ensigm;562327]I admit there is a small possibility that if I find a factor a few bits above the current TF limit, it is possible that it will clash someone else's TF assignment. But practically speaking TF assignments in this range are [URL="https://www.mersenne.org/assignments/?exp_lo=54300000&exp_hi=55000000&execm=1&exdchk=1&exfirst=1&exp1=1"]extremely rare[/URL] (see reason below too). [/QUOTE]
Not so rare:
[url]https://www.mersenne.org/assignments/?exp_lo=53000000&exp_hi=54000000&execm=1&exdchk=1&exfirst=1&exp1=1[/url]

Ensigm 2020-11-05 23:06

[QUOTE=Uncwilly;562332]Not so rare:
[URL]https://www.mersenne.org/assignments/?exp_lo=53000000&exp_hi=54000000&execm=1&exdchk=1&exfirst=1&exp1=1[/URL][/QUOTE]
You're right. But for me to "successfully" poach a factor, the corresponding exponent needs to
[QUOTE]be assigned for TF in the short period (usually 2~7 days) between when it is sent to my Colab machine and when the work is completed[/QUOTE]and
[QUOTE]have a factor within my P-1 bounds and only a few bits (usually one) above the old TF limit (~0.28% for 1 bit, ~0.54% for 2 bits)[/QUOTE]As I've noted, the first condition has never been met. Once it has been met 200 times, I expect fewer than one poach to have happened.

S485122 2020-11-06 09:14

[QUOTE=Ensigm;562327]I admit there is a small possibility that if I find a factor a few bits above the current TF limit, it is possible that it will clash someone else's TF assignment. But practically speaking TF assignments in this range are [URL="https://www.mersenne.org/assignments/?exp_lo=54300000&exp_hi=55000000&execm=1&exdchk=1&exfirst=1&exp1=1"]extremely rare[/URL] (see reason below too). Plus, I only work on exponents that are not assigned at the moment I checked, and turn them in rapidly. As far as I know, I have never actually resulted in poaching anyone else's factor (or even seen a single exponent that has been assigned as TF before I turn in my work).

It's actually quite a dilemma here: I can, for sure, manually reserve these as TF, then do the P-1, and manually unreserve them. This would eliminate the possibility of accidentally poaching anyone else's factor. But on the other hand, this (or reserving these exponents as TF in general) would prevent DC being done on them, and the general belief is that it should be discouraged, according to discussions at [URL="https://www.mersenneforum.org/showthread.php?t=26001"]this thread[/URL]. To conclude, there's no perfect way to do it, but at least I'm comfortable with what I am doing.[/QUOTE]What I tried to say, but obviously failed to do, is that there is only ONE reservation system: if an exponent is reserved for TF it will not be available for PRP, DC, or P-1; if it is reserved for P-1 it will not be available for TF, PRP, LL, or DC; and so on... The thread you cite is not about people doing TF in CAT 0, 1 or 2 ranges, but about the fact that the reservation system should not assign exponents for TF in those ranges until the wavefront has left them behind.

One doesn't poach a factor. Poaching is about submitting (any) work on exponents reserved by others.

If you absolutely want to work on exponents in those ranges, choose exponents that have been double-checked in the DC range, or those that had a first test (that was not suspicious) in the first-time check range. (Of course it is easier and less error-prone to use the reservation system.)

If you care about GIMPS and the other "participants" please use the reservation system.

Jacob

Ensigm 2020-11-06 09:52

[QUOTE=S485122;562390]there is only ONE reservation system: if an exponent is reserved for TF it will not be available for PRP, DC, or P-1; if it is reserved for P-1 it will not be available for TF, PRP, LL, or DC; and so on...[/QUOTE]
Ah, I get what you mean. I think we have different philosophies, that's all. I'm not going to participate in this debate further.

Ensigm 2020-11-06 10:05

First half of 54.4M
 
I'm now "reserving" for 30 days all the exponents in
the first half of 54.4 M
that have B1=B2≤730,000 [B]and[/B] no DC.

These "reservations" only apply to factoring.



Previous ranges 54.30 M, 54.31 M and 54.32 M (B1=B2≤730,000) should have been finished by now, and I am "unreserving" them. If I have missed any exponent in these ranges, feel free to pick them up.

Ensigm 2020-11-07 13:39

New observation
 
1 Attachment(s)
[QUOTE=Ensigm;562327]I can, for sure, manually reserve these as TF, then do the P-1, and manually unreserve them.[/QUOTE]
[B]New observation today[/B]: I actually reserved a few exponents (via the manual GPU assignment page) as TF, and used their assignment IDs for doing P-1 on the same exponents. Guess what? When the P-1 job finished, the assignment was unreserved as well, which means I don't have to unreserve them manually! It will even update my P-1 progress. (Attached screenshot: [I]Stage 1 of TF? That's kinda sus.[/I])

So the problem now boils down only to the reservation part. Manual reservation is largely impractical because the exponents are often not contiguous, which means you have to reserve them one by one (and then manually paste the AIDs). If only I could find a way to automate this...

I don't think these expos should be available for TF. If PrimeNet stops giving TF assignments in these ranges, then both I and the TF doers are rogue agents, and the concept of poaching won't apply. Or PrimeNet could offer an "overlay" type of assignment that doesn't interfere with DC. But until the rules change, it seems the only way to avoid clashes with other TF doers entirely is to use the TF assignment system itself (the downside being that it might delay DC for a few days). Still, I need a way to automate this before it becomes practical.

firejuggler 2020-11-07 14:44

Currently running 32.1M and above, with a B1 of 500k and a B2 of 15M

Ensigm 2020-11-07 15:23

Previous ranges 54.33 M, 54.34 M and 54.35 M (B1=B2≤730,000) are finished (not many exponents fit the criteria), and I am "unreserving" them. If I have missed any, feel free to pick them up.

Ensigm 2020-11-10 10:22

Second half of 54.4M
 
Now "reserving" for 30 days all the exponents in
the second half of 54.4 M
that have B1=B2≤730,000 [B]and[/B] no DC.

Currently working on:
- 54.36 M, 54.37 M, 54.38 M, 54.39 M (B1=B2≤730,000)
- the whole 54.4 M (B1=B2≤730,000)

petrw1 2020-11-10 17:06

[QUOTE=Ensigm;562166]Cat 1 exponents with only Stage 1 P-1 are surprisingly rare. I'm now "reserving" for 30 days all the exponents in
- 54.36 M, 54.37 M, 54.38 M and 54.39 M
that have B1=B2≤730,000 [B]and[/B] no DC.

These "reservations" only apply to factoring.[/QUOTE]

In many of the 5xM, 4xM and 3xM ranges they will be rare because of the work I (and others) have been doing there the last 3 years working on the sub-2000 project here:
[url]https://www.mersenneforum.org/showthread.php?t=22476[/url]

In short the goal is to get all 0.1 Million ranges to below 2000 unfactored exponents.
For example, here 39.3M and 39.6M are the ranges not yet below 2000:
[url]https://www.mersenne.ca/status/tf/0/0/4/3900[/url]

We achieve this with more bits of TF and deeper P-1, especially where B1=B2 or the bounds are relatively low.

Ensigm 2020-11-13 17:11

First half of 54.5 M
 
Now "reserving" for 30 days all the exponents in
the first half of 54.5 M
that have B1=B2≤730,000 [B]and[/B] no DC.

Currently working on:
- 54.36 M, 54.37 M, 54.38 M, 54.39 M (B1=B2≤730,000)
- the whole 54.4 M (B1=B2≤730,000)
- the first half of 54.5 M (B1=B2≤730,000)

Ensigm 2020-11-13 23:41

Previous ranges 54.36 M, 54.37 M, 54.38 M, 54.39 M (B1=B2≤730,000) should now be finished and I am "unreserving" them.

Report for the whole 54.3 M:
5 factors found in 81 attempts (6.17%). Based on the bounds that were used, expected probability would be between 3.78% and 3.81%.

Currently working on:
- the whole 54.4 M (B1=B2≤730,000)
- the first half of 54.5 M (B1=B2≤730,000)

Ensigm 2020-11-14 13:18

Second half of 54.5 M
 
Now "reserving" for 30 days all the exponents in
the second half of 54.5 M
that have B1=B2≤730,000 [B]and[/B] no DC.

Note: this will be the last range I take in Cat 1 for a while. I will be going on a vacation shortly, and my maximum throughput will decrease by about 2/3 and will also become less constant. As a result, I will be moving away from smaller exponents so that those with higher throughput may take them.

Currently working on:
- the whole 54.4 M (B1=B2≤730,000)
- the whole 54.5 M (B1=B2≤730,000)

petrw1 2020-11-14 14:26

[QUOTE=Ensigm;563188]Now "reserving" for 30 days all the exponents in
the second half of 54.5 M
that have B1=B2≤730,000 [B]and[/B] no DC.
[/QUOTE]

When you are done there could I interest you in bringing that kind of P-1 power to these ranges to help with my sub-2000 project:
48.4, 49.6
It's just over 1,500 exponents with similarly low B1=B2

And if that's not enough I can find more.

Thx

Ensigm 2020-11-14 15:14

[QUOTE=petrw1;563195]When you are done there could I interest you in bringing that kind of P-1 power to these ranges to help with my sub-2000 project:
48.4, 49.6
It's just over 1,500 exponents with similarly low B1=B2
[/QUOTE]

I will probably first go over all exponents under M48 with B1=B2<700,000 and no DC. That's gonna be at least until this year's end. Then I will either go into the sub-2000 project or "regular" first-time P-1, or do both.

Also, don't think that is a lot of P-1 power. I'm currently at just over 120 GHz-d/d and will soon go down to around 40 GHz-d/d. If you have over 1,500 exponents spread over two 0.1M ranges, I will probably take only 0.01M at a time.

Ensigm 2020-11-15 15:13

Second half of 55 M (B1=B2≤690,000)
 
Now "reserving" for 30 days all the exponents in
the second half of 55 M
that have B1=B2≤690,000 [B]and[/B] no DC.

Currently working on:
- the whole 54.4 M (B1=B2≤730,000)
- the whole 54.5 M (B1=B2≤730,000)
- the second half 55 M (B1=B2≤690,000)

Ensigm 2020-11-17 15:53

First half of 56 M (B1=B2≤690,000)
 
Now "reserving" for 30 days all the exponents in
the first half of 56 M
that have B1=B2≤690,000 [B]and[/B] no DC. I will also try to reserve them as TF assignments.

[STRIKE]Previous ranges 54.4 M (B1=B2≤730,000) should now be finished and I am "unreserving" them.[/STRIKE] Edit: one exponent, 54482819, still remains WIP.

[STRIKE]Report for range 54.4 M (B1=B2≤730,000):
2 factors found in 40 attempts (5%).[/STRIKE] Expected probability would be 3.81%~3.83%.

Currently working on:
- the whole 54.4 M (B1=B2≤730,000)
- the whole 54.5 M (B1=B2≤730,000)
- the second half 55 M (B1=B2≤690,000)
- the first half 56 M (B1=B2≤690,000)

Ensigm 2020-11-19 14:32

Previous ranges 54.4 M (B1=B2≤730,000) should now be finished and I am "unreserving" them.

Report for range 54.4 M (B1=B2≤730,000): 2 factors found in 41 attempts (4.88%). Expected probability 3.81%~3.83%.

Currently working on:
- the whole 54.5 M (B1=B2≤730,000)
- the second half 55 M (B1=B2≤690,000)
- the first half 56 M (B1=B2≤690,000)

petrw1 2020-11-21 04:35

36.2M reserved for P-1
 
Thanks

Ensigm 2020-11-22 16:18

Previous ranges 54.5 M (B1=B2≤730,000) and the second half of 55 M (B1=B2≤690,000) should now be finished and I am "unreserving" them.

Report for range 54.5 M (B1=B2≤730,000): 0 factors found in 18 attempts (0%). Expected probability 3.81%~3.83%.

Report for range second half of 55 M (B1=B2≤690,000): 0 factors found in 12 attempts (0%). Expected probability 3.89%~3.92%.

Currently working on:
- the first half of 56 M (B1=B2≤690,000)

Ensigm 2020-11-24 13:38

Now intending to work on the exponents in the second half of 56 M that have B1=B2≤700,000 [B]and[/B] no DC.
As most of the exponents will be in Cat 3 and higher, I will reserve every exponent as TF before I start my work. If you see an exponent that is not currently reserved in the Primenet system, then it simply means I am not working on it. The declaration here is more for recording progress.

Currently informally "reserved":
- the first half 56 M (B1=B2≤690,000)

Currently intending to do (will Primenet-reserve every exponent before I actually start working on it, so you don't have to avoid anything; there is no risk of me poaching your factor):
- the second half 56 M (B1=B2≤700,000)

Ensigm 2020-12-05 04:54

Report for the second half of 56 M (B1=B2≤700,000): 2 factors found out of 35 attempts (5.71%). Expected prob. 3.93%~3.94%.

Now intending to work on the exponents in the first half of 57 M that have B1=B2≤700,000 [B]and[/B] no DC. I will reserve every exponent as TF before I start my work on it, so you don't have to avoid them.

Currently informally "reserved":
- the first half 56 M (B1=B2≤690,000)

Currently planning to do (will Primenet-reserve every exponent):
- the first half 57 M (B1=B2≤700,000)

ixfd64 2021-01-19 17:17

I'm finishing up the exponents in the 13m - 13.1m range where B2 < 1.9 million.

