mersenneforum.org > Great Internet Mersenne Prime Search > PrimeNet
2019-04-12, 12:10   #1
M344587487 ("Composite as Heck")

Low range P-1, should NF be submitted?

I'm doing some very low range P-1 testing on 344M exponents TF'd to 72 bits. Is it worth submitting the NF results? Could it harm the way work is doled out if a poor P-1 attempt is in the DB? I know the recommended P-1 settings are chosen to save two PRP tests; I'm experimenting with very low P-1 bounds on a Radeon VII, which, relative to a comparable Nvidia card, is much better at P-1/PRP and much worse at TF.
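For context, the stage-1 half of P-1 can be sketched in a few lines. This is a toy illustration of the method (not GpuOwl's or Prime95's implementation): factors q of M_p satisfy q ≡ 1 (mod 2p), so 2p is folded into the exponent, and a factor shows up in the gcd when the rest of q − 1 is B1-smooth.

```python
from math import gcd

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(2, n + 1) if sieve[i]]

def pm1_stage1(p, B1):
    """Toy P-1 stage 1 on M_p = 2^p - 1.  Returns a nontrivial factor
    or None.  A factor q is found when q - 1 is B1-smooth apart from
    the guaranteed 2p part."""
    M = (1 << p) - 1
    x = pow(3, 2 * p, M)            # factors are 1 mod 2p, so include 2p
    for q in primes_up_to(B1):
        e = 1
        while q ** (e + 1) <= B1:   # highest power of q not exceeding B1
            e += 1
        x = pow(x, q ** e, M)
    g = gcd(x - 1, M)
    return g if 1 < g < M else None
```

For example, `pm1_stage1(11, 2)` returns 23 (M11 = 2047 = 23 × 89, and 23 − 1 = 2·11 needs nothing beyond the guaranteed 2p part).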
2019-04-12, 12:21   #2
preda ("Mihai Preda")

IMO go ahead and submit them. The server (i.e. the task allocator) can always implement logic to ignore them if the bounds are low, and hand the exponent out again for another P-1 run if worthwhile.

Last fiddled with by preda on 2019-04-12 at 12:22
2019-04-14, 23:27   #3
Madpoo (Serpentine Vermin Jar)

Quote:
Originally Posted by M344587487
I'm doing some very low range P-1 testing on 344M exponents TF'd to 72 bits. Is it worth submitting the NF results? Could it harm the way work is doled out if a poor P-1 attempt is in the DB? I know the recommended P-1 settings are to save two PRP tests, I'm experimenting with very low P-1 on a Radeon VII which relative to a comparable nvidia card is much better at P-1/PRP and much worse at TF.
Submit them please.

The server will generally reassign an exponent for more P-1 work if the bounds on any prior test are small enough (I think the threshold was something like B1=100,000).

I'm currently doing a bunch of small P-1 tests on exponents that are assigned for LL work but have had no P-1 done on them at all. I'm mostly testing to B1=100,000 with no B2, or, in some cases for REALLY old assignments, just doing a small B2=150,000.

The point of what I'm doing is to weed out a few of those tests, most of which were long ago abandoned, by finding a factor here and there.

Even though my bounds are low, I have managed to find a handful of factors out of a few hundred tests. I've got an old server with a ton of RAM so it seems suited for that kind of work.
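The reassignment rule described above might look something like the following sketch. The threshold and the function name are assumptions lifted from this thread, not actual PrimeNet server code:

```python
B1_THRESHOLD = 100_000   # cutoff mentioned in the thread; an assumption

def needs_more_p1(prior_attempts):
    """Decide whether an exponent should be handed out for more P-1 work.
    prior_attempts is a list of (B1, B2) pairs already in the database."""
    if not prior_attempts:
        return True                      # never P-1'd at all
    best_b1 = max(b1 for b1, _b2 in prior_attempts)
    return best_b1 < B1_THRESHOLD        # all prior attempts were too shallow
```

Under this rule, an exponent whose only P-1 run used very low bounds (say B1=18676) would still be handed out again, while one done to typical bounds would not.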
2019-04-15, 08:03   #4
M344587487 ("Composite as Heck")

Sounds good. I'm using really low bounds of B1=18676 and B2=224112 on 344M exponents TF'd to 72 bits, tuned to find the most factors. Currently 15 factors from 1250 tests, ~3.84 factors per day, which seems good, but I could be wrong.
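As a back-of-envelope sanity check on those numbers (the per-test time here is inferred from the reported rate, not measured):

```python
def factors_per_day(success_prob, seconds_per_test):
    """Expected factors per day from running P-1 tests back to back."""
    return success_prob * 86_400 / seconds_per_test

# 15 factors in 1250 tests is a 1.2% hit rate; ~3.84 factors/day then
# implies roughly 270 s per test (an inferred figure, not a measurement).
rate = factors_per_day(15 / 1250, 270)
print(round(rate, 2))  # 3.84
```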
2019-04-15, 11:55   #5
preda ("Mihai Preda")

Quote:
Originally Posted by Madpoo
I'm mostly testing to either B1=100,000 and no B2, or in some cases for REALLY old assignments, just doing a small B2=150,000.
If you spend the effort of doing stage 1, it is worth continuing with a little stage 2 as well. E.g. if you do B1=100,000, continuing to B2=1M might be worthwhile.

Or, in other words: if you have a fixed amount of compute time to allocate to an exponent [for P-1], it's better spent on a lower B1 plus some B2 than on only a higher B1 for the same total time.
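The stage-1/stage-2 split can be made concrete with the usual smoothness criterion: P-1 with bounds (B1, B2) finds a factor q exactly when every prime power in q − 1 is at most B1, except for at most one extra prime in (B1, B2]. A toy predicate along those lines (ignoring the guaranteed 2p part of Mersenne factors, and assuming the common one-large-prime stage 2):

```python
def factorize(n):
    """Trial-division factorisation: {prime: exponent}."""
    fs, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            fs[d] = fs.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        fs[n] = fs.get(n, 0) + 1
    return fs

def p1_finds(q, B1, B2):
    """Would P-1 with bounds (B1, B2) find the factor q?"""
    over = [(p, e) for p, e in factorize(q - 1).items() if p ** e > B1]
    if not over:
        return True                       # pure stage 1
    if len(over) == 1:
        p, e = over[0]
        return e == 1 and p <= B2         # one large prime caught by stage 2
    return False
```

For example, 2089 − 1 = 2³·3²·29, so B1=10 alone misses it, but B2=100 picks up the 29; that extra prime is far cheaper to reach by raising B2 than by raising B1 to 29.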
2019-04-15, 12:02   #6
preda ("Mihai Preda")

Quote:
Originally Posted by M344587487
Sounds good. I'm using really low bounds of B1=18676 and B2=224112 on 344M exponents TF'd to 72 bits, tuned to find the most factors. Currently 15/1250, ~3.84 factors per day which seems good but I could be wrong.
IMO you could bump up the bounds a bit without seeing too large a decrease in factors/day, and with a big increase in the percentage factored. In fact, that comparison would be an interesting experiment :)

I'd guess that a large proportion of the factors being found in stage 1 would be an indication that B1 is too high :)

I'm trying to optimize the implementation in GpuOwl to make it more efficient for extremely-short P-1 tasks (on the order of minutes). E.g. the background stage-2 GCD is a big gain there.
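The "background GCD" idea is just overlap: the end-of-stage gcd is CPU work, so it can run while the GPU starts the next task. A minimal threading sketch of that overlap structure (my own illustration, not GpuOwl's actual code):

```python
import threading
from math import gcd

def background_gcd(x, M, on_factor):
    """Run gcd(x - 1, M) off the main thread and report any nontrivial
    factor via the callback, so the caller can queue the next test
    immediately instead of waiting on the gcd."""
    def work():
        g = gcd(x - 1, M)
        if 1 < g < M:
            on_factor(g)
    t = threading.Thread(target=work)
    t.start()
    return t   # caller joins later, once the next test is underway
```

For very short P-1 tasks the gcd is a noticeable fraction of the total time, which is why hiding it pays off.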
2019-04-15, 14:47   #7
M344587487 ("Composite as Heck")

Quote:
Originally Posted by preda
IMO you could bump up the bounds a bit, without seeing a too-large decrease of factors/day. And with a big increase in the percentage factored. In fact, that comparison would be an interesting experiment :)
Sounds like fun. Attached is the table used to get to this point, based on the mersenne.ca calculator and optimised for the R7 with testing. I'll extend the timings to larger bounds, and once an interesting bound is found I'll run it for a few days to compare.

Quote:
Originally Posted by preda
I'm trying to optimize the implementation in GpuOwl to make it more efficient for extremely-short P-1 tasks (on the order of minutes). E.g. the background stage-2 GCD is a big gain there.
To that end, how about two simultaneous P-1 runs using half the available RAM each? My limited understanding is that a given test with less RAM still tests the same factors with the same outcome, but loses a little efficiency (?). That loss could be more than made up for by the ~8% throughput gain of running simultaneously, and you potentially gain further by letting one worker run faster while the other is in a prep phase (if that phase cannot be eliminated entirely?).
Attached Files
File Type: pdf r7.pdf (32.2 KB, 150 views)
2019-04-15, 22:27   #8
Madpoo (Serpentine Vermin Jar)

Quote:
Originally Posted by preda
If you spend the effort of doing the stage-1, it is worth to continue it with a little bit of stage-2 as well. e.g. if you do B1=100'000, continuing to B2=1M might be worth it.

Or, in other words: if you have a fixed amount of compute time to allocate to an exponent [for P-1], it's better spent doing lower-B1 and B2 rather then only higher-B1 for the same total time.
I'll check it out. In all honesty, I was trying to blaze through the couple of thousand exponents that are assigned without any P-1 work done, in the minimum time possible. Maybe when I get through them all I can go back and do higher bounds (I'm working by "date assigned" and moving forward in time; I'm midway through 2018 right now).

In other words, I was aiming for some extremely low-hanging fruit. Most of these are assignments that were checked out years and years ago and never checked in again with any progress. But because they're large enough, they fall into assignment limbo where they don't expire, though they probably should, just in case someone else has a hankerin' to test that particular 100M-digit exponent. Poor man's poaching.
2019-04-15, 22:40   #9
chalsall ("Chris Halsall")

Quote:
Originally Posted by Madpoo
I'll check it out. In all honesty I was trying to blaze through the couple thousand of exponents that are assigned without any P-1 work done, in the minimum time possible. ... Poor man's poaching.
Just a thought... Perhaps the "Powers that Be" at Primenet could consider implementing some assignment expiry rules up in 332M?

Personally, I think those working up there are wasting their time (at least on LL'ing). But some seem drawn to the $150,000 prize.

Much like a lottery; a tax on those bad at the maths...
2019-04-19, 20:23   #10
M344587487 ("Composite as Heck")

Quote:
Originally Posted by preda
IMO you could bump up the bounds a bit, without seeing a too-large decrease of factors/day. And with a big increase in the percentage factored. In fact, that comparison would be an interesting experiment :)

I guess that a large ratio being factored in first-stage would be an indication that B1 is too high :)

I'm trying to optimize the implementation in GpuOwl to make it more efficient for extremely-short P-1 tasks (on the order of minutes). E.g. the background stage-2 GCD is a big gain there.
After some experimentation I've come to these conclusions, which apply directly to 344M exponents TF'd to 72 bits on a Radeon VII but may apply to some degree across the board:
  • It's not worth systematically doing low P-1 as a pre-step, unless possibly there's some intermingling of low P-1 and TF not considered here. At best you roughly break even doing two P-1 phases compared to one P-1 pass done right, and that's only if the first phase uses settings that maximise factors found.
  • The bounds given by the calculator's "save x LL" setting are slightly inefficient (higher B1 and lower B2 for a given factor probability than is optimal). The best way to use the calculator is to take the "save x LL" result, then feed the resulting factor probability back into the calculator to get optimal bounds. That shaves ~10 minutes off a ~312 minute P-1 test (at the save-1-LL setting), which is not insignificant.
  • The best setting to minimise total time per exponent (given that any NF will be followed by a PRP test) is a bit beyond the optimised "save 1 LL" bounds: B1=1764925, B2=52947750, a B2 multiplier of 30. It's a much finer saving of ~10 minutes per exponent, which might as well be margin of error when the total per-exponent time is ~317 hours.
  • I tested higher B2 bounds than the calculator suggested to see if they were more optimal for the R7, but they weren't. The calculator tends to underestimate the GHz-days a little, which throws off what would be considered optimal if you used those figures, but it otherwise seems fairly accurate.
  • If low-bounds P-1 has already been done, it's still worth doing a second high-bounds pass with B1=1764925, B2=52947750. The cutoff below which more P-1 makes sense (for 344M TF'd to 72 bits) is a prior run with below a ~5.5% chance of finding a factor, which corresponds to B1=639470, B2=15347280.
  • Midrange bounds waste the most time, compared to either doing optimal bounds in the first place or doing really low bounds as a pre-step.
Attached Files
File Type: pdf r7.pdf (39.7 KB, 165 views)
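The bullet points above condense into a small decision rule. The bounds are the ones quoted in the post; the function and its shape are my own sketch, not something the calculator or the server implements:

```python
# Bounds quoted in the post for 344M exponents TF'd to 72 bits.
HIGH_B1, HIGH_B2 = 1764925, 52947750      # optimal pass; B2 = 30 * B1
CUTOFF_B1, CUTOFF_B2 = 639470, 15347280   # ~5.5% factor-chance threshold

def worth_second_pass(prior_b1, prior_b2):
    """Crude proxy for the ~5.5% rule: a prior P-1 attempt strictly
    below the cutoff bounds is worth redoing at the high bounds."""
    return prior_b1 < CUTOFF_B1 and prior_b2 < CUTOFF_B2
```

Under this rule, the very-low-bounds runs from earlier in the thread (B1=18676, B2=224112) would still get a second pass, while a completed optimal pass would not.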
2019-04-19, 21:42   #11
chalsall ("Chris Halsall")

Quote:
Originally Posted by M344587487
After some experimentation I've come to these conclusions
I hope this is taken the way it is intended.

There is a lot to be learned from failure. For most of us, it is actually how we learn the most.