mersenneforum.org > Prime Search Projects > Conjectures 'R Us
Old 2010-05-10, 23:57   #34
rogue
 
 
"Mark"
Apr 2003
Between here and the

3×13³ Posts
Suggestion

There are a lot of single k conjectures left. Far more than I had anticipated. I have a suggestion. This is what I am thinking:

1) Sieve each conjecture up to n=100K, 200K, or some other value. I would expect a few days of sieving should be sufficient for each conjecture.
2) Load those conjectures into a public PRPNet server.
3) Set up the server to order by n instead of by length.
4) Let 'er rip.
5) As new single k conjectures are found, sieve and add to the server.
6) Once n gets close to the leading edge, start sieving to a higher n.
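A toy model of steps 2)-4), assuming nothing about PRPNet's internals: candidates are dispatched lowest-n-first from a priority queue, so a newly loaded conjecture receives all the work until its leading edge catches up with the others. The k value for the second base below is a placeholder, not a real remaining k.

```python
import heapq

# Toy model of a lowest-n-first work queue (illustrative only, not
# PRPNet's actual implementation).

def load(queue, base, k, n_lo, n_hi, step=5_000):
    """Push (n, base, k) candidates onto the priority queue."""
    for n in range(n_lo, n_hi + 1, step):
        heapq.heappush(queue, (n, base, k))

queue = []
load(queue, 43, 166, 100_000, 120_000)  # base already at n=100K (166*43^n+1)
load(queue, 72, 731, 25_000, 120_000)   # newly added base; k=731 is a placeholder

# The first work units handed out all belong to the new base, because its
# leading edge (n=25K) is far below the other base's (n=100K).
first = [heapq.heappop(queue) for _ in range(5)]
print(first)
```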

I would expect that the possibility of proving one or more of these conjectures would interest many people. I think the PrimeGrid and NPLB users would be the most likely people to join the project. I expect that most of the primes to be found would be in the Top 5000.

Note that PRPNet can handle hundreds of thousands of candidates in the server, presuming that the server doesn't have other processes stealing too much CPU. It can also handle dozens of concurrent users easily.

Thoughts?
Old 2010-05-11, 01:16   #35
paleseptember
 
 
Jun 2008
Wollongong, .au

10110111₂ Posts

An excellent idea, Rogue! I have no idea of the technicalities involved (I bow to the greater expertise of everyone else on the planet), but you make it sound feasible.

Would there be issues of credit if a top-5000 prime is found?
Is the amount of work available sufficient to survive an onslaught from PrimeGrid/NPLB/others with hefty resources?
Old 2010-05-11, 01:46   #36
MyDogBuster
 
 
May 2008
Wilmington, DE

B24₁₆ Posts

Quote:
Is the amount of work available sufficient to survive an onslaught from PrimeGrid/NPLB/others with hefty resources?
There is more work to do here than PrimeGrid could ever come up with. The problem with attracting the PrimeGrid people is that we don't, and probably never will, grant BOINC credits for work done. So much for attracting PrimeGridders. Most people at NPLB already do work here as well. JMTCW

Last fiddled with by MyDogBuster on 2010-05-11 at 01:47
Old 2010-05-11, 03:16   #37
rogue
 
 
"Mark"
Apr 2003
Between here and the

3·13³ Posts

Quote:
Originally Posted by MyDogBuster View Post
There is more work to do here than PrimeGrid could ever come up with. The problem with attracting the PrimeGrid people is that we don't, and probably never will, grant BOINC credits for work done. So much for attracting PrimeGridders. Most people at NPLB already do work here as well. JMTCW
Not all PrimeGrid participants care about BOINC. I would expect that the PRPNet-only users over at PrimeGrid (and there are a few) are less likely to care than others. For them it's easy: a simple change to their ini file and they can participate immediately.

Last fiddled with by rogue on 2010-05-11 at 03:17
Old 2010-05-11, 04:11   #38
gd_barnes
 
 
May 2007
Kansas; USA

5×2,141 Posts

Quote:
Originally Posted by rogue View Post
There are a lot of single k conjectures left. Far more than I had anticipated. I have a suggestion. This is what I am thinking:

1) Sieve each conjecture up to n=100K, 200K, or some other value. I would expect a few days of sieving should be sufficient for each conjecture.
2) Load those conjectures into a public PRPNet server.
3) Set up the server to order by n instead of by length.
4) Let 'er rip.
5) As new single k conjectures are found, sieve and add to the server.
6) Once n gets close to the leading edge, start sieving to a higher n.

I would expect that the possibility of proving one or more of these conjectures would interest many people. I think the PrimeGrid and NPLB users would be the most likely people to join the project. I expect that most of the primes to be found would be in the Top 5000.

Note that PRPNet can handle hundreds of thousands of candidates in the server, presuming that the server doesn't have other processes stealing too much CPU. It can also handle dozens of concurrent users easily.

Thoughts?
We can consider that, but there's a lot of administrative work involved. Care to coordinate it?

About NPLB users searching it. I'm not too fond of that. I'd actually rather have more people from here working at NPLB. It's much easier to administer and check and we already have a stats database set up.

Here, people have not consistently posted results so setting up a stats DB would only reflect a small percentage of work done; not to mention the tremendous amount of effort required to load them all.


Gary
Old 2010-05-11, 12:34   #39
rogue
 
 
"Mark"
Apr 2003
Between here and the

3×13³ Posts

Quote:
Originally Posted by gd_barnes View Post
We can consider that, but there's a lot of administrative work involved. Care to coordinate it?

About NPLB users searching it. I'm not too fond of that. I'd actually rather have more people from here working at NPLB. It's much easier to administer and check and we already have a stats database set up.

Here, people have not consistently posted results so setting up a stats DB would only reflect a small percentage of work done; not to mention the tremendous amount of effort required to load them all.
I cannot set up a server that has public access, but I could coordinate the sieving.

PRPNet does have built-in user and server stats, but you are probably thinking of something different from what it offers.
Old 2010-05-21, 21:43   #40
gd_barnes
 
 
May 2007
Kansas; USA

10705₁₀ Posts

All 1k bases <= 200 are now at n>=100K!

Let's see if we can extend that to bases <= 250.
Old 2010-05-23, 02:37   #41
gd_barnes
 
 
May 2007
Kansas; USA

5·2,141 Posts

Quote:
Originally Posted by rogue View Post
I cannot set up a server that has public access, but I could coordinate the sieving.

PRPNet does have built-in user and server stats, but you are probably thinking of something different from what it offers.
Mark,

I haven't forgotten about this. I wouldn't want to do anything like this on a large number of bases starting from only n=25K. The coordination effort needed is too large vs. the moderate amount of effort it would take for people to individually bring such bases up to n=100K. Here is what I think is the best thing to do:

See how many bases <= 250 we can get down to 1, 2, or 3 k's remaining at n=100K. Max and I are working towards that right now with help from some others in the 1k and recommended threads.

To get to that point, what we'll probably have to do is pick a "cutoff"; that is: for further testing, a base might have to have <= 7 k's remaining at n=25K, or <= 5 k's remaining at n=50K, or something like that, to "see" if it will "make the cut" by getting down to 3 k's remaining at n=100K. You get the picture.

For the time being, I am sieving and Max is testing many bases <= 250 to n=100K that currently have 1 or 2 k's remaining at n=25K. Within a month or so, I'm thinking I'll step in and assist with testing bases that have 3-7 k's remaining at n=25K. The idea is to get a comprehensive list of bases <= 250 that have <= 3 k's remaining at n=100K for a potential large team drive on many such bases.
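The "cutoff" rule above can be sketched as a simple predicate; the thresholds below are only the example values floated in the post, not a settled rule:

```python
# Hypothetical sketch of the "make the cut" test described above.
# Thresholds are the example values from the post, not a fixed rule.

CUTOFFS = {25_000: 7, 50_000: 5, 100_000: 3}  # n-level -> max k's remaining

def makes_the_cut(ks_remaining_at):
    """ks_remaining_at maps a tested n-level to the count of k's remaining.
    A base stays in the running if it is within the limit at every level
    it has been tested to."""
    return all(ks_remaining_at[n] <= limit
               for n, limit in CUTOFFS.items() if n in ks_remaining_at)

# 6 k's remaining at n=25K: still in. 4 k's remaining at n=100K: out.
print(makes_the_cut({25_000: 6}))                          # True
print(makes_the_cut({25_000: 6, 50_000: 5, 100_000: 4}))   # False
```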

Once that is done, then I think that will be a (somewhat?) manageable size chunk of work to bite off and sieve what will be dozens of bases for n=100K-500K or n=100K-300K or whatever we decide (for such a large # of bases, the smaller range is probably better). That would be a very good team effort that you or others could coordinate if you would still like to do that.

You could decide how we'd subsequently load it into a server for testing. If you put them in one base at a time and it was set to test by size like you indicated, any new base loaded in would get priority until it catches up to the rest. I like that idea because it will take a lot of testing to bring even a single base from 100K to 300K and I would hate to see other bases "wait" for a single or just a few stubborn bases. I like it because it then becomes "NPLB style" where we search 100s of k's upwards by n instead of searching individual k's. In effect, the bases here are kind of like the k's at NPLB.

That said, there could also be side efforts like we have at NPLB. We could allow individual-base reservations initially for 5-10 of them where a single person might take one of the bases to some high n by himself. If the interest for them wasn't there, then we'd just dogpile those into the server and it would bring them up to where the others are at.

What's cool about this is that it could allow CRUS to ultimately compete with some of the larger non-BOINC projects at top-5000. As it is currently, we spend a lot of resources searching small ranges. This would help orient some of those resources towards higher ranges.

I think the key in doing all of this is to minimize the admin time to CPU time ratio. I think this has the best chance of accomplishing that.


Gary
Old 2010-05-23, 12:48   #42
rogue
 
 
"Mark"
Apr 2003
Between here and the

3·13³ Posts

Quote:
Originally Posted by gd_barnes View Post
Once that is done, then I think that will be a (somewhat?) manageable size chunk of work to bite off and sieve what will be dozens of bases for n=100K-500K or n=100K-300K or whatever we decide (for such a large # of bases, the smaller range is probably better). That would be a very good team effort that you or others could coordinate if you would still like to do that.

You could decide how we'd subsequently load it into a server for testing. If you put them in one base at a time and it was set to test by size like you indicated, any new base loaded in would get priority until it catches up to the rest. I like that idea because it will take a lot of testing to bring even a single base from 100K to 300K and I would hate to see other bases "wait" for a single or just a few stubborn bases. I like it because it then becomes "NPLB style" where we search 100s of k's upwards by n instead of searching individual k's. In effect, the bases here are kind of like the k's at NPLB.
I would load the DB with about 100,000 candidates (workunits) to start and then monitor performance as you add more candidates and as more clients connect. If the server isn't getting too bogged down, then start adding more candidates. You will have to ask yourself if you want to limit by max n or by max conjectures.

I would then set the sortoption to N, not decimal length. As you are well aware the different bases will yield vastly different decimal sizes for the same k and n. This will level out the leading edge of each conjecture so that they all reach the same n at roughly the same time. The downside is that as you load new conjectures, they will get all of the work until their n reaches the same levels as the other conjectures. The way to avoid this would be to load multiple conjectures at a time. If enough clients connect up to the server, then those conjectures should "catch up" to the others in the server in a few days.
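The size gap described here is easy to quantify: k*b^n+1 has roughly n·log10(b) + log10(k) + 1 decimal digits, so candidates at the same n differ enormously in size across bases. A small sketch (the k values for bases 3 and 250 are placeholders for illustration):

```python
import math

# Approximate decimal length of k*b^n + 1; shows why sorting by n is not
# the same as sorting by decimal size across bases.

def digits(k, b, n):
    return int(n * math.log10(b) + math.log10(k)) + 1

for k, b in [(4, 3), (166, 43), (7, 250)]:
    print(f"{k}*{b}^100000+1 is about {digits(k, b, 100_000):,} digits")
```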

So to sum it up, I would do the following:
1) Sieve to n = 500K.
2) Load candidates for multiple conjectures (3 or more) concurrently, but limit n to 200K, starting with about 100,000 candidates.
3) Set sortoption to N in the server.
4) As conjectures are knocked off and the server continues to perform well, load candidates for multiple new conjectures (3 or more) into the server.
5) When n gets close to 200K, load the next batch (n up to 300K, etc.).

This allows you to get the most conjectures into the database. As n increases, the testing time will grow quickly, so it will take longer and longer to get to each n level.

I think it will be exciting for people to knock off some conjectures and find Top 5000 primes at the same time.
Old 2010-06-07, 03:49   #43
gd_barnes
 
 
May 2007
Kansas; USA

29D1₁₆ Posts

Quote:
Originally Posted by MyDogBuster View Post
Added 166*43^n+1. I have it reserved to n=200K.
Since you reserved all remaining unreserved 1k bases < 118 on both sides for n=100K to (at least) n=200K a while back, when you finish S43, you might check out the newer 1k remaining on S72 that is unreserved and at n=100K. That was a 2k one at n=25K that I took to n=100K a few weeks ago and found 1 prime for.
Old 2010-06-22, 18:33   #44
Flatlander
I quite division it
 
 
"Chris"
Feb 2005
England

31×67 Posts

Not being pushy, but "sieve file available" has not yet been flagged for the following:
S259
S335
S341
S401
R334
R347
Just trying to avoid a(n unlikely) repetition of work.

Stop adding more 1k-ers! lol


