- **15k Search**
(*https://www.mersenneforum.org/forumdisplay.php?f=16*)

- **GROUP IDEAS**
(*https://www.mersenneforum.org/showthread.php?t=797*)

GROUP IDEAS
I thought it would be a good idea to start a topic on group ideas, rules, procedures, etc. For instance, when exponents n are tested up to (xxxxxx), can another member reserve any or all ranges of n? This is up to you, the members. Since some of us may be testing large exponents in the future, the techniques should be worked out. Input is welcome, Shane F. |

I've assumed until your post that, when there's a (xxxxxx) to the right of the last prime, it means that the candidate has been released by the member, i.e. if anybody else is interested in investigating that number further, it is available.
Still, I think it is too early for anybody to think about reserving an unreserved range. There are tons of candidates yielding a couple of primes each. |

Yes, the intention was that (xxxxxx) releases the 15k.
But if a member does not give (xxxxxx), do they continue to reserve the entire 15k? [quote]Still, I think it is too early for anybody to think about reserving an unreserved range. There are tons of candidates yielding a couple of primes each.[/quote] Yes, it is early, but for computational reasons we are looking through a small range of n, which could hinder the full scope. Members could try to match or better the most frequent 15k. First we should agree on what defines "most frequent"; since the ranges of completed n vary, it is hard for me to define. Currently the candidate k values are less than a billion; the bound is 4+ billion. So you are right, it is early yet! |

group credit
In the new database, people do not lose credit for group searches!
Now Jocelyn and Steve can fully join the group by emailing Chris Caldwell (caldwell@utm.edu). Just ask him to add 15k to your current prover code, for the primes found with us. NEW DATABASE: http://primes.utm.edu/bios/top20.php?type=project&by=PrimesRank |

Odd n
Odd n are consistently more frequent than even n!
The current ratio is about 164/96, in favor of odd n. People can reserve odd or even n if they like. It would be nice if 6(164/96)^(1/2) = pi? |

Some thoughts about future work and other things ...
Within about two months our project has found the amazing number of 160+ titanic primes. Thanks to all of you! Great work! A lot of things and problems arise when a project like ours grows and gets "dynamic". So here are some thoughts, comments and ideas I have:

Currently Shane and I are sieving the list of possible 15k's up to 4.3 billion (2^32), which is the upper limit for LLR (it will run about 3 times slower on larger 15k's). From that sieving we will get a lot of "heavy-weighted" candidates, waiting to be tested by all of us. The computed weights give you an idea of the number of primes you may expect, e.g. a 15k with a larger weight may produce more primes than a 15k having a smaller weight. But there's no guarantee that the largest weight will give the most primes! For example: the most frequent 15k up to now is 16995, for which we have found 10 primes for 110000<n<175000. But the weight for this candidate is only 3.08 (Nash weight = 5411) - quite low compared to our current "almost"-4.0 candidates. Well, I haven't found any new primes for 15k=16995 in the range 175000<n<230000! This balances that low weight ... You see, for finding many primes a bit of luck is also needed.

Another thing is that some 15k's may already have been tested by other people (non-group members). We try to check the candidates before we present them on our list. But we're all human and therefore not free of errors. So please have a look yourself at Chris Caldwell's database and check the specific candidates you want to test, to avoid wasting your CPU power! (There may also be some really bad guys who take our list and test the candidates before we do - without being a member of our group ... any ideas on that topic?)

Those of you thinking about re-reserving 15k's which were already tested by someone else should contact that group member. Maybe he or she has sieved that 15k up to a much higher n but hasn't tested the whole range for primality; e.g. in most cases I use NewPGen to sieve for n up to 200000, but sometimes (when there are not so many primes) I stop LLR around n = 160000 or so.

The last topic is the speed of P4 vs. Athlon machines. I've found (and many others before me) that NewPGen runs faster on Athlons, while LLR is faster on the P4. Those of us who have both types of machines available (I'm such a lucky guy) could run NewPGen on the Athlon and LLR on the P4. We could also think about splitting the sieving/LLRing between different people - those with an Athlon run NewPGen, those having a P4 run LLR. But then both should get credit for the primes found and therefore should have one common prover code. When we have found some "best" 15k's and decide to test larger ranges of n, we should think about some coordinated sieving (yes, to be done by those Athlon guys). Comments and other ideas are welcome, Thomas R. |
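The notion of weight Thomas describes can be illustrated with a toy sketch. This is not the actual Nash or Brennen weight computation, only the underlying idea: for a fixed k, count how many exponents n in a window leave k*2^n - 1 free of small prime factors; the more survivors, the "heavier" the k, and the more candidates it feeds to LLR. All parameters below are illustrative choices, not the project's real bounds.

```python
def small_primes(limit):
    """Primes below `limit` via a simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * limit
    sieve[:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(sieve[i * i::i]))
    return [i for i, is_p in enumerate(sieve) if is_p]

def survivors(k, n_range=1000, prime_limit=256):
    """Count n in [1, n_range] for which k*2^n - 1 has no prime
    factor below prime_limit -- a crude stand-in for a weight:
    heavier k leave more survivors after small-prime sieving."""
    primes = small_primes(prime_limit)
    count = 0
    for n in range(1, n_range + 1):
        value = k * (1 << n) - 1
        if all(value % p for p in primes):
            count += 1
    return count
```

A real weight function normalizes such a count against an average candidate, but even this raw survivor count lets two k values be compared, with the caveat Thomas gives: a heavy k is only *statistically* more productive, as 16995 shows in both directions.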

[quote]some 15k's may be already tested by other people (non-group members). We try to check the candidates before we present them on our list[/quote] There is a very low probability in general that you will be reproducing others' work, and I believe the new database will have a reject feature, so you'll know right away.

[quote](There may also be some really bad guys who take our list and test the candidates before we do - without being a member of our group ... any ideas on that topic?)[/quote] We could consider them Lone 15k hunters - Steve & Jocelyn. There really is no reason for them or others to poach, since individuals do not lose or divide credit because of the projects and programs used. But nevertheless, we will just have to use the data they produce as non-members towards our common goal.

[quote]Those of you thinking about re-reserving 15k's which were already tested by someone else should contact that group member. Maybe he or she has sieved that 15k up to a much higher n but hasn't tested the whole range for primality; e.g. in most cases I use NewPGen to sieve for n up to 200000, but sometimes (when there are not so many primes) I stop LLR around n = 160000 or so.[/quote] Ah yes, the contradiction of decision. Since 16995 leveled out, maybe 160000+ will too. That is the beauty of this project, I think: we are generally increasing our odds, but locally we make interactive decisions (sometimes referred to as a gambler's fallacy). Which is known to work when counting cards! As a matter of fact, in a Las Vegas, Nevada casino you will be kicked out if you count cards, though the law states it is not illegal, because you are just using your mind and math.

[quote]When we have found some "best" 15k's and decide to test larger ranges of n, we should think about some coordinated sieving (yes, to be done by those Athlon guys).[/quote] Yes indeed; we could also consult Paul Underwood from 321 search for pointers.
Our guesstimated target remains at 2 years before Prime95 hits a ten-million-digit Mersenne. |

[quote]When we have found some "best" 15k's and decide to test larger ranges of n, we should think about some coordinated sieving (yes, to be done by those Athlon guys). Yes indeed; we could also consult Paul Underwood from 321 search for pointers.[/quote] A coordinated sieving effort is easily done with NewPGen -- there is a service to split across 'p' into as many ranges as you like. We stopped sieving at 14.7 trillion for 321 at n up to one million. Had we known this was the limit at the outset of the project, we could have run NewPGen on a single Athlon just to reduce the size of the save file, and then split the file into 15 blocks, each of which would have taken about 10 days on a 1.4GHz Athlon. There is a service within NewPGen to merge the sieved blocks. Just remember to set the maximum 'p' appropriately for each block. |
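The block arithmetic behind the split Paul describes is straightforward; here is a minimal sketch (the actual splitting and merging are built-in services of NewPGen, not this script, and the range endpoints below are just the 321 figures used as an example):

```python
def split_p_range(p_start, p_end, blocks):
    """Split the sieve range [p_start, p_end) into `blocks`
    near-equal contiguous chunks, one per machine.  The chunks
    cover the range exactly, with no gaps and no overlap."""
    size, extra = divmod(p_end - p_start, blocks)
    chunks, lo = [], p_start
    for i in range(blocks):
        # spread the remainder over the first `extra` chunks
        hi = lo + size + (1 if i < extra else 0)
        chunks.append((lo, hi))
        lo = hi
    return chunks

# e.g. 15 blocks covering p up to 14.7 trillion, as in the 321 effort:
chunks = split_p_range(0, 14_700_000_000_000, 15)
```

Each chunk's upper bound is then the "maximum p" to set for that block, matching Paul's reminder at the end of his post.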

Thanks Paul!
Your project is moving right along, and you are due for a big'n. We haven't even answered your last large prime yet. Happy hunting, |

weight sieving
I took a look at the Payam numbers again, and I am finding good weights quickly with 2805k.
I am sieving 2805k<999999990 for weights above 3.6. This leaves 3+ billion open for sieving. I was also thinking of making sub-weight categories, based on factors. This would include the worst, best, and average weight for the multiplier. For instance, 2805k<28995285 has a best weight of 3.71, a worst weight of 0.79, and an average of about 2.5. So if you choose a 2805k with a weight of 3.71, you know that with extremely bad luck you could get a 0.79, but you are more likely to get 2.5, and most likely to get a 3.71. We could define an Mk weight, and then include a peripheral effect of the factors of 15k, or M1k, M2k, ... weights: 3k, 5k, 11k, 17k, 2805k. Or look at it in Dirichlet terms. Any ideas? |

Increasing the speed of the project
I had a couple of ideas that could be used to increase the speed of this project.
1) I understand that the numbers giving at least 50 primes for n under 5000 are selected. The search for such primes can easily be done with PFGW by writing a script, so no one has to go through counting the number of primes per candidate. 2) Once this is done, all the candidates can be written into a file. Then, using these candidates, a simple C++ program can be written (which I am willing to write) that writes all the k's for n=112000 into file 1, the k's for n=112001 into file 2, and so on. This way NewPGen can sieve candidates for a fixed n. This kind of sieve will be much faster compared to the method we currently use. Thanks! Harsh Aggarwal |
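The reason fixed-n sieving is fast can be shown in a toy sketch (an illustration of the idea, not how NewPGen implements it): for an odd prime p, k*2^n - 1 is divisible by p exactly when k ≡ 2^(-n) (mod p), so a single modular inverse eliminates a whole arithmetic progression of k values at once. The bounds and prime list below are made up for the example.

```python
def fixed_n_sieve(n, k_max, primes):
    """For fixed exponent n, strike out every k <= k_max for which
    k*2^n - 1 has a factor among `primes`, then return the
    surviving multiples of 15 (the project's candidate k form)."""
    alive = bytearray([1]) * (k_max + 1)
    for p in primes:
        if p == 2:
            continue  # k*2^n - 1 is odd for n >= 1, so 2 never divides it
        # k ≡ 2^(-n) (mod p) is exactly the divisible residue class.
        # pow with a negative exponent and a modulus (Python 3.8+)
        # computes the modular inverse.
        r = pow(2, -n, p)
        for k in range(r, k_max + 1, p):
            alive[k] = 0
    return [k for k in range(15, k_max + 1, 15) if alive[k]]
```

One inverse per prime replaces a trial division per (k, p) pair, which is the speed-up Harsh is pointing at.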

Sounds like a pretty good idea!
You should contact Thomas R. and ask him about this. We are just starting to sieve multiple k using Phil Carmody's sieve, which is faster than sieving a single k, but I think fixed n is pretty fast too. Thanks Harsh! |

LLR and ksieve
Hi,
Phil Carmody's ksieve will work equally fast. I am not sure how you people are planning on using it with LLR, but if it will help, I am willing to write a program that takes ksieve's output file and converts it into files compatible with LLR/NewPGen for each k in the input file. Let me know! Harsh Aggarwal |

Citrix,
I'm currently sieving with ksieve2m - the multiple-k version of ksieve - and found it much faster than NewPGen when doing about 15 k in parallel. And if we go to k>2^31, where NewPGen needs k to be entered in factorized form and gets a lot slower than on smaller k, then ksieve is even more clearly the better choice. There is already an option (-l) in Phil's abccreat.pl script to generate input files for LLR, though it is undocumented ... Phil sent me a modified version (abckcreate.pl), which can create one single ABC or LLR file from multiple del-files, with the candidates sorted in increasing order. We are currently testing it, and he will include that script in the next version of ksieve.

If you want to contribute your coding skills, then this could be a project for you: we need a fast way to compute the weights (Nash and/or Brennen) of a large number of k values. At the beginning of the 15k project I used a simple VBS script, which submitted the value of k to PSieve, extracted the necessary information from the output, and stored it in a text file. The whole process is very slow, and PSieve can handle k<2^31 only. The next step was some modifications of Jack Brennen's Java applet (http://www.babybean.com/primes/ProthWeight.html) - a Nash weight applet and a stand-alone program which reads the k values from a file and writes both kinds of weights into another file. It can do k>2^31, but it is still not very fast. I thought about rewriting the whole thing in C or C++ using the GMP library, but I don't have the time to do that at the moment. So that could be your task, if you like. Thomas R. |

deciding the best candidate
Hi,
I had a suggestion: instead of only looking at candidates that give 50 primes for n under 5000, we should also look at the number of Proth tests that were performed to get this result. The candidates that give the most primes while requiring the fewest Proth tests should be the best candidates, because for large n we will then perform the fewest Proth tests and get the most primes. Thanks, Citrix |
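Citrix's selection rule is just a primes-per-test ratio. A minimal sketch with hypothetical bookkeeping (the k values and tallies below are made up for illustration; real counts would come from the PFGW/LLR logs):

```python
def rank_by_efficiency(stats):
    """Order candidate k values by primes found per primality test
    performed, descending -- the rule Citrix proposes: many primes
    from few tests is the best use of CPU time at large n.
    `stats` maps k -> (primes_found, tests_performed)."""
    return sorted(stats,
                  key=lambda k: stats[k][0] / stats[k][1],
                  reverse=True)

# Hypothetical (primes_found, tests_performed) tallies for three k's:
stats = {16995: (50, 700), 2805: (55, 900), 69105: (48, 650)}
ranking = rank_by_efficiency(stats)
```

Note that under this metric a k with slightly fewer primes can outrank one with more, if it needed markedly fewer tests to find them, which is exactly the point of the suggestion.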

calculating weight
Thomas,
In reply to your previous post: I have figured out a way to generate the top n candidates by weight within a given range and with a given number of divisors, without testing every candidate individually. It would be better to generate the top 1000 candidates and then do the PRP tests on them to find the best candidate. Let me know if you want me to find the top 1000 candidates. Citrix |
