mersenneforum.org (https://www.mersenneforum.org/index.php)
-   XYYXF Project (https://www.mersenneforum.org/forumdisplay.php?f=110)
-   -   Leyland Primes (x^y+y^x primes) (https://www.mersenneforum.org/showthread.php?t=19347)

 Uncwilly 2020-06-22 19:57

1 Attachment(s)
Just for safety I am dropping a snapshot of that file here as a zip.

 rogue 2020-06-22 22:10

This page, [url]http://www.primefan.ru/xyyxf/primes.html#0[/url], is not being updated. Where is the status of the search being maintained?

Is there a table with a list of all ranges that have been checked, who checked them, and when?

I'm asking because if someone wants to participate, they would have difficulty knowing where to start.

 pxp 2020-06-23 12:35

[QUOTE=rogue;548824]This page, [url]http://www.primefan.ru/xyyxf/primes.html#0[/url], is not being updated. Where is the status of the search being maintained?[/QUOTE]

Andrey Kulsha's effort is a snapshot of when it was last maintained (January 2017) and is unlikely to be updated. In terms of probable primes, the difference between it and my [URL="http://chesswanks.com/num/a094133.txt"]Leyland prime indexing page[/URL] is that, as of this moment, Norbert Schneider has added an additional 66 PRPs and I have added an additional 449. The L(x,y) PRP search is currently decoupled from its x/y dependency to the extent that size matters. Indexing is possible because all x^y+y^x of a given digit-size are being probed. For example, all Leyland primes smaller than 80965 decimal digits are now known. The cost of that knowledge is that the already-probed x's and y's are all over the place.
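To make the digit-size indexing concrete, here is a minimal Python sketch (an illustration added for this writeup, not pxp's actual Mathematica code) that computes the decimal length of L(x,y) = x^y + y^x from logarithms alone, so (x,y) pairs can be binned and sorted by size without ever forming the numbers themselves:

```python
import math

def leyland_digits(x: int, y: int) -> int:
    """Decimal digit count of L(x,y) = x^y + y^x, via logarithms only."""
    a = y * math.log10(x)  # log10 of x^y
    b = x * math.log10(y)  # log10 of y^x
    hi, lo = max(a, b), min(a, b)
    # log10(x^y + y^x) = hi + log10(1 + 10^(lo - hi))
    return math.floor(hi + math.log10(1 + 10 ** (lo - hi))) + 1

# L(328574,15), the largest-known Leyland prime mentioned in this thread:
print(leyland_digits(328574, 15))  # 386434 digits
```

Sorting pairs by the value inside the floor gives exactly the size ordering the indexing effort relies on.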

Still, one can say that once we have reached 81300 digits, x needs to be greater than 19000; once we have reached 86025 digits, x needs to be greater than 20000; and so on. If my indexing effort reaches 105130 digits (which seems doable by 2022), x will need to be greater than 24000. That is one thing to keep in mind if one wishes to participate. A second thing is that for larger x, up to 50000, y has supposedly been checked up to 800; for larger x still, up to 500000, y has supposedly been checked up to 25. Just pick something; there is unlikely to be any duplication. If you know how to sort (x,y) pairs by L(x,y) digit-size, even better: you can pick a range of digit-sizes and contribute what you find. If you really want a challenge, find the next-larger PRP after Serge Batalov's L(328574,15). It will immediately become the largest-known Leyland prime.
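These x thresholds can be sanity-checked: for a given x, the largest candidate with y < x is L(x, x-1), which has roughly x·log10(x) decimal digits, so once every Leyland number below D digits has been probed, any x with x·log10(x) < D is exhausted. A quick editorial check in Python (the quoted thresholds appear to be rounded up slightly from these values):

```python
import math

def digit_frontier(x: int) -> float:
    """Approximate digit count of L(x, x-1), the largest y < x candidate
    for a given x: about x * log10(x) decimal digits."""
    return x * math.log10(x)

for x, d in [(19000, 81300), (20000, 86025), (24000, 105130)]:
    # each frontier lands within a few digits of the threshold quoted above
    print(x, round(digit_frontier(x)), d)
```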

 rogue 2020-06-23 13:59

[QUOTE=pxp;548870]The cost of that knowledge is that the already-probed x's and y's are all over the place.[/QUOTE]

That is my issue.

Are you saying that once you reach x = 19000, all y < x will have been searched for those x?

Is the intention to "not trust" what others have done and just retest all y < x for the range of x?

The table at the top of [url]http://www.primefan.ru/xyyxf/primes.html#0[/url] would be really useful, but it is out of date. Why can't you just keep a copy of it on your page and keep it up to date? It shouldn't be hard, unless you are focused purely on "decimal length" goals instead of the goals of the original project.

If you are focused purely on "decimal length" goals does that mean that for each x you are searching to a different y? If so, are you even using xyyxsieve? Are you sieving then manipulating the output to "skip" x/y values larger than the decimal length you are searching for?

 pxp 2020-06-23 22:06

For me this all started in April 2015 when I computed a table of the first 5000 Leyland numbers (OEIS [URL="http://oeis.org/A076980"]A076980[/URL]). At the same time I realized that I could do something similar with the Leyland primes and thus contributed a table of the first 100 Leyland primes (OEIS [URL="http://oeis.org/A094133"]A094133[/URL]). I wondered how many of the then-1092 known Leyland primes were indexable and thus I extended my prior calculation of the first 100 Leyland primes to the first 954 Leyland primes. In May 2015 I used my result to curve-fit the Leyland number indices of those 954 Leyland primes and determined that the largest-known Leyland prime L(328574,15) would have a Leyland prime index of ~5550. That meant that there were thousands of [I]smaller[/I] Leyland primes waiting to be discovered. It also meant that because everyone was so focused on this x/y reservation scheme, I might be able to find new, relatively-small primes by examining the Leyland numbers [I]between[/I] existing Leyland primes. Thus began my Leyland prime hunt.

The basis of the hunt was my creation of a database of the first 331682621 Leyland numbers. That would include [I]all[/I] Leyland numbers up to 100000 digits. They were represented as (x,y) pairs and sorted by L(x,y) size. I would step through these pairs in Mathematica one at a time, testing each one for GCD[x,y]==1 before doing PrimeQ[L(x,y)]. That's it: no sieving, and I never worry about where the x or the y are. Yes, that's important when one is distributing a computation and needs to keep track of who is doing what. I just never bought into that, since I was pursuing my own goal of indexing the Leyland primes. And to that extent I feel good about what I have accomplished. Those initial 954 indexed primes are now 1567 indexed primes; of those 613 added indices, 422 are for primes that I have discovered. I knew when I started this project that I might be stepping on some toes, and for that I apologize. On the other hand, as I have tried to point out, there is nothing preventing folk from stepping out of their comfort zone and continuing the dance to a new beat.
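In outline, the hunt over a size-sorted list of pairs looks like the Python sketch below (an editorial illustration, not the actual Mathematica code; a single Fermat test stands in for the stronger PrimeQ). The GCD filter works because if d = gcd(x,y) > 1, then d divides both x^y and y^x, hence divides L(x,y), which is therefore composite:

```python
import math

def is_prp(n: int, base: int = 3) -> bool:
    """Fermat PRP test, a cheap stand-in for Mathematica's PrimeQ."""
    return pow(base, n - 1, n) == 1

def hunt(pairs):
    """Step through size-sorted (x, y) pairs; skip pairs whose GCD
    exceeds 1, since that GCD divides L(x,y) = x^y + y^x."""
    for x, y in pairs:
        if math.gcd(x, y) != 1:
            continue
        if is_prp(x**y + y**x):
            yield (x, y)

# The two smallest Leyland primes: L(3,2) = 17 and L(9,2) = 593.
print(list(hunt([(3, 2), (4, 2), (5, 2), (9, 2)])))  # [(3, 2), (9, 2)]
```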

 rogue 2020-06-24 02:31

Then the original search and yours are like comparing apples to oranges.

If someone wants to work on ranges for the original project without redoing your work, they would have difficulty knowing where to start.

If you are not sieving, then you are probably wasting a ton of time trial factoring and PRP testing. What you can do is sieve with xyyxsieve, then use a script to pare down the terms that are outside of the decimal-length range you are searching.

 pxp 2020-06-24 11:08

The idea of using a script already makes it unlikely that xyyxsieve would be a fit for my lack of programming skills (I've used Mathematica for 24 years and I still have to regularly look stuff up). Are you saying that if I wanted to search for Leyland primes from just above Serge Batalov's L(328574,15) to, say, L(80332,71590) [64138108 Leyland numbers that run from 386434 digits to 390000 digits; I [I]have[/I] a 1 GB text file of the size-sorted (x,y) pairs] I would first have to use xyyxsieve to cull candidates from all Leyland numbers up to some incredibly large term and then throw out the ones that are less than 386434 digits or more than 390000 digits? How does one feed xyyxsieve so that it is guaranteed to include [I]every[/I] Leyland number candidate in that range? What might be the memory requirement to do that?

 rogue 2020-06-24 12:27

[QUOTE=pxp;548994][...] How does one feed xyyxsieve so that it is guaranteed to include [I]every[/I] Leyland number candidate in that range? What might be the memory requirement to do that?[/QUOTE]

You would have to know the range of x and y to feed into the program. Let it run for an hour or two to thin out the range; I would expect 75% or more of the starting terms to be removed. This will obviously include terms with more or fewer decimal digits than you need. If you do not know how to write a script, ask someone to write one to cull the terms you don't want to test; there are a few people on this forum with that skill, and someone will likely step up to the plate for you. Take the culled file and use it as input to xyyxsieve, continuing to sieve until the average time to remove a term is close to the time it takes to do a PRP test.
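The culling step can be very small. The Python sketch below is a hypothetical illustration: it assumes each candidate line carries the two integers x and y, as in ABC-style sieve files, and would need adjusting to xyyxsieve's actual output format:

```python
import math

def in_digit_range(line: str, lo: int, hi: int) -> bool:
    """Keep a candidate 'x y' line only if L(x,y) has lo..hi decimal digits."""
    try:
        x, y = map(int, line.split()[:2])
    except ValueError:
        return True  # pass headers/comments through untouched
    a, b = y * math.log10(x), x * math.log10(y)
    top, bot = max(a, b), min(a, b)
    digits = math.floor(top + math.log10(1 + 10 ** (bot - top))) + 1
    return lo <= digits <= hi

# Usage sketch (hypothetical file name):
# kept = [ln for ln in open("sieve.out") if in_digit_range(ln, 386434, 390000)]
```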

Assuming you have multiple cores for testing, I suggest that you set up a PRPNet server. Once a server is set up this is by far the fastest way to distribute work across multiple cores and multiple computers. You don't need programming skill to run PRPNet. You will need to install MySQL or PostgreSQL and create a database. I can help you with that. It isn't as hard as you might think.

 pxp 2020-06-26 18:34

If this is just a matter of finding terms that I want to test, I can and have already done that in Mathematica. For example, there are only 17632 Leyland numbers greater than Serge Batalov's L(328574,15) that have exactly 386434 decimal digits. Selecting those that have GCD(x,y) = 1 reduces this to 10808 terms. Subjecting these numbers to Mathematica's PrimeQ (which is Mathematica's PRP test) for at most one second per term reduces our candidate list to [URL="http://chesswanks.com/num/LLPHbdl/386434.txt"]1324 terms[/URL]. PrimeQ has a built-in small-primes divisor test and I think one second per term is enough to run that part of it. Still, that requires about 4 minutes on my old 2013 Mac and I've been reluctant to apply it to the entire database of 64138108 Leyland numbers (386434 digits to 390000 digits). I did however create similar lists of [URL="http://chesswanks.com/num/LLPHbdl/386435.txt"]386435-digit terms[/URL] and [URL="http://chesswanks.com/num/LLPHbdl/386436.txt"]386436-digit terms[/URL] just as a proof of concept. The nice thing about the lists is that the entries are ordered by L(x,y) magnitude (every term is larger than its immediate predecessor).
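The cheap screening stage can also be done directly on the (x,y) pair with modular exponentiation, never constructing the 386434-digit number at all; this is, in essence, what a sieve does. A hedged Python illustration (added here for clarity; it assumes L(x,y) itself is larger than every prime tried):

```python
def has_small_factor(x: int, y: int, primes=(3, 5, 7, 11, 13)) -> bool:
    """True if some small prime p divides L(x,y) = x^y + y^x.
    Uses pow(x, y, p), so the full number is never built."""
    return any((pow(x, y, p) + pow(y, x, p)) % p == 0 for p in primes)

# L(5,2) = 57 = 3 * 19 is caught; the prime L(3,2) = 17 survives the screen.
print(has_small_factor(5, 2), has_small_factor(3, 2))  # True False
```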

For me the hard part is actually doing a full PRP test on the numbers. PrimeQ[L(85085,34812)] took over 6 hours on my main machine, although all 4 of its cores were busy on another project. I would not be surprised if there were software that can do this much faster, but lacking such, it would take me 11 months to test the entire first list. If my Mac-mini farm weren't already engaged, I could do it in a week. But that is one list; there will be 3567 such lists to get up to 390000 digits.

 xilman 2020-06-26 18:56

[QUOTE=pxp;549149]For me the hard part is actually doing a full PRP test on the numbers. [...] There will be 3567 such lists to get up to 390000 digits.[/QUOTE]Perhaps it is time for me to spend some more cpu on this project, for the first time in ten years or so.

Unfortunately I am busy factoring Generalized Cullen and Woodall numbers. Just one of those is expected to take me more than a year unless others help me. I no longer have the cpu power that I used to have.

 rogue 2020-06-26 20:14

pfgw will likely be faster since it is based upon the gwnum library. I suggest that you run it and compare against Mathematica's PrimeQ function. I do not have Mathematica, so I cannot compare its speed with pfgw myself. Since I have a 2013 MacBook Pro, I'm guessing that pfgw is about 2x faster than Mathematica, given the timings you reported for PrimeQ.
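For reference, pfgw takes either a single expression on the command line or a file of candidates; the ABC header below is the conventional one for x^y + y^x forms (flags and layout quoted from memory, so check pfgw's own documentation before relying on them):

```
# single expression:
#   pfgw64 -q"85085^34812+34812^85085"
#
# or an ABC file, one (x,y) pair per line after the header:
ABC $a^$b+$b^$a
85085 34812
```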

I'll make you an offer: send me one of your lists and I'll sieve it with xyyxsieve to the appropriate depth, then send it back to you. That way both of us can see how much time it could save your project. If there is overlap between the x and/or y values in the lists, send me all of them and I can see how xyyxsieve handles doing it all in one chunk.

All times are UTC.