Factoring to 87 bits
I've been thinking that in order to revive this project a bit, it might be good to get candidates up to 87 bits, as there are currently six that are at 86 bits and, according to mersenne.ca (link below), 87 is the optimal bit depth for these candidates. This will pave the way for an eventual P-1 and PRP/LL test of these candidates.
[url]https://www.mersenne.ca/factorbits.php?exponent=3321930371[/url] But please let me know if any of this is incorrect, since I'm going off what mersenne.ca says and my scant/outdated knowledge.
I am not sure that even 87 bits will eventually be considered the level that we want.
Have fun. May you slay a billion digit exponent with a factor. 
[QUOTE=clowns789;492556]I've been thinking that in order to revive this project a bit, it might be good to get candidates up to 87 bits, as there are currently six that are at 86 bits and, according to mersenne.ca (link below), 87 is the optimal bit depth for these candidates. This will pave the way for an eventual P-1 and PRP/LL test of these candidates.
[url]https://www.mersenne.ca/factorbits.php?exponent=3321930371[/url] But please let me know if any of this is incorrect, since I'm going off what mersenne.ca says and my scant/outdated knowledge.[/QUOTE] I'm afraid the project is not coordinated anymore... I took the site offline when we went more than six months without reservations. I don't think William is following it anymore. But if you want to offer your work to advance the bit level to 87, you are most welcome! Just post your advancements here, so that people like James at mersenne.ca can keep his records updated. Luigi
[QUOTE=clowns789;492556]I've been thinking that in order to revive this project a bit, it might be good to get candidates up to 87 bits, as there are currently six that are at 86 bits and, according to mersenne.ca (link below), 87 is the optimal bit depth for these candidates. This will pave the way for an eventual P-1 and PRP/LL test of these candidates.
[url]https://www.mersenne.ca/factorbits.php?exponent=3321930371[/url] But please let me know if any of this is incorrect, since I'm going off what mersenne.ca says and my scant/outdated knowledge.[/QUOTE] Well, it is at least not consistent with the bit levels shown on this subsite of mersenne.ca: [url]https://www.mersenne.ca/tf1G.php[/url] According to that page, which holds records of TF bit depth for all 1000M < n <= 2^32, the optimal bit level for TF for n=3321930371 is 91 bits. That is just how it looks now, and it may very well be that these TF bit depths are subject to change in the future. But hey, it is your resources and you can do whatever you want, and if you like to take n=3321930371 to 87 bits only, then that's your choice :smile: Happy hunting and TF :smile:
Thanks all for the responses.
[QUOTE=KEP;492596] According to that page, wich holds records of TF bitdepth for all n>1000M to n<=2^32, the optimal bit level for TF for n=3321930371 is 91 bit. [/QUOTE] Interestingly, according to the following link, factoring to 91 bits would require over 150K GHzdays of computation, while a LL test requires only 91K: [url]http://www.mersenne.ca/exponent/3321930371[/url] Perhaps I'm misreading it, but that seems to imply that 91 would be too high, or one of the estimates is off, perhaps due to it being so far outside of normal assigned ranges. 
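The trade-off in the post above can be made concrete with the usual GIMPS rule of thumb: the chance that 2^p-1 has a factor between 2^(b-1) and 2^b is roughly 1/b, and each additional bit of TF costs roughly twice the previous one, so you keep going only while the next bit level is cheaper than the primality-test effort it expects to save. A minimal sketch of that break-even rule (the cost figures in the example are hypothetical, not taken from mersenne.ca):

```python
# Rough sketch of the GIMPS-style break-even rule for picking a TF depth.
# Assumptions (rules of thumb, not mersenne.ca data):
#   - trying bit level b+1 costs roughly twice bit level b;
#   - the chance 2^p-1 has a factor in (2^b, 2^(b+1)) is about 1/(b+1);
#   - a found factor saves `tests_saved` primality tests (historically ~2
#     for LL + double-check).
def optimal_tf_depth(cost_at_base_bit, base_bit, test_cost,
                     tests_saved=2, max_bit=100):
    b, cost = base_bit, cost_at_base_bit
    while b < max_bit:
        next_cost = cost * 2  # TF cost roughly doubles per bit level
        expected_saving = (1.0 / (b + 1)) * tests_saved * test_cost
        if next_cost > expected_saving:
            return b  # going deeper costs more than it expects to save
        b, cost = b + 1, next_cost
    return b

# Hypothetical example: 2^80 costs 30 GHz-days, a test costs 600,000 GHz-days.
print(optimal_tf_depth(30, 80, 600_000))  # -> 88
```

With those made-up inputs the break-even lands in the high 80s, in the same ballpark as the 87-91 bit levels debated in this thread; real numbers from mersenne.ca would shift it.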
[QUOTE=clowns789;492620]Interestingly, according to the following link, factoring to 91 bits would require over 150K GHzdays of computation, while a LL test requires only 91K:
[url]http://www.mersenne.ca/exponent/3321930371[/url] Perhaps I'm misreading it, but that seems to imply that 91 would be too high, or one of the estimates is off, perhaps due to it being so far outside of normal assigned ranges.[/QUOTE] The figure of 91K is wrong. I think it is closer to 600K. James's site doesn't have P95 timing data for a FFT big enough to handle that exponent, and so uses timing from a smaller FFT, hence the discrepancy. If 87 bits is good enough for CPU TF, then 91 bits is correct for GPU TF. EDIT: Compare the LL GHDays for an exponent 1/10th the size: [url]http://www.mersenne.ca/exponent/332193019[/url]. An exponent 10 times the size should be [B]at least[/B] 100 times the effort, so 600K might actually be a conservative estimate. 
Thanks [B]axn[/B] for the explanation. I might have to wait to get a better GPU, or simply do the lower ranges.

[QUOTE=axn;492626]The figure of 91K is wrong. I think it is closer to 600K. James's site doesn't have P95 timing data for a FFT big enough to handle that exponent, and so uses timing from a smaller FFT, hence the discrepancy. If 87 bits is good enough for CPU TF, then 91 bits is correct for GPU TF.
EDIT: Compare the LL GHDays for an exponent 1/10th the size: [URL]http://www.mersenne.ca/exponent/332193019[/URL]. An exponent 10 times the size should be [B]at least[/B] 100 times the effort, so 600K might actually be a conservative estimate.[/QUOTE]I've obtained around p[SUP]2.1[/SUP] scaling for an assortment of software, PRP, LL, P1, over broad ranges of p. Applying that as a long extrapolation here, I get 622,082. GhzDays; a few years of a Radeon VII, if the software existed, to do one gigadigit PRP test. (3.34 years at Prime95's reported rate of 510. GhzD/day on linux & ROCm. Note that was at 5M fft, and I've seen some considerable throughput dropoff at larger fft lengths; of order half, on Windows.) An LL test should only be considered on large exponents if an initial PRP/GEC/Proof/Cert test sequence yield a probablyprime result. LL even with Jacobi check is simply too likely to have an undetected error in such large long computations. LL confirmation would probably best be done with different software and hardware and frequent comparison of interim residues. 
If implementing R. Gerbicz's [URL="https://mersenneforum.org/showpost.php?p=523833&postcount=10"]method[/URL] of storing numerous residues to perform a correctness check of a gigadigit LL run, I estimate 40.5 MB/100Mdigit x 10 x sqrt(3,321,9xx,xxx), or 405 MB x 57,636 ~ 23 TB of disk space needed, which is a large but feasible array.
At 10[SUP]1.5[/SUP] lower for 100Mdigit, 0.7 TB is much more manageable.
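That storage estimate can be re-derived in a few lines: one gigadigit residue is about 10 x 40.5 MB = 405 MB, and the scheme stores on the order of sqrt(p) residues (using p = 3,321,930,371, the exponent discussed earlier in the thread):

```python
import math

# Re-deriving the disk-space estimate for the Gerbicz-style LL check:
# ~sqrt(p) stored residues, each about 405 MB for a gigadigit exponent.
p = 3_321_930_371
residue_mb = 40.5 * 10          # MB per gigadigit residue (10 x 40.5 MB)
n_residues = math.isqrt(p)      # ~57,636 residues
total_tb = residue_mb * n_residues / 1_000_000  # MB -> TB
print(round(total_tb, 1))       # -> 23.3

# The 100M-digit case is 10^1.5 (~31.6x) smaller:
print(round(total_tb / 10**1.5, 2))  # -> 0.74
```

This confirms both numbers in the post: roughly 23 TB for the gigadigit run and roughly 0.7 TB for a 100M-digit one.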
Thanks [B]kriesel[/B] for the insight in the last two posts. I am interested to see if we can get a P-1 or PRP test going on an exponent once we get one TFed to 91 bits. Perhaps I could be tempted to get an RTX 3090 and get it done in a semi-reasonable time frame! :smile:

[QUOTE=clowns789;557219]Thanks [B]kriesel[/B] for the insight in the last two posts. I am interested to see if we can get a P-1 or PRP test going on an exponent once we get one TFed to 91 bits. Perhaps I could be tempted to get an RTX 3090 and get it done in a semi-reasonable time frame! :smile:[/QUOTE]I assume you're referring to the last TF with the RTX 3090. Start lobbying Mihai and George now for a gigadigit-capable FFT length in gpuowl, and robust error checking in P-1, which I estimate would take a month to run on a Radeon VII, and start saving for a Radeon Pro VII for the multi-year PRP run.
