"Trial Factoring on Double Check Candidates"
I have a question about assignments on GPU72. When I get new ones, I have the option of getting "Trial Factoring on Double Check Candidates".
So, if I understand correctly, this will give me exponents that have already been LLed once, but not DCed yet? And I'd be factoring at a higher bit level than what was done previously? If that's the case, why weren't those exponents factored to that level before they were LLed the first time? Aren't exponents TFed to the bit level where the probability of finding a factor, multiplied by the LL time saved, drops below the time the TF itself takes? If so, did anything change since these were first TFed that makes it worthwhile to TF to higher bit levels now, even though we'll only be saving half the LL time (since one LL is already done, and we're only saving the DC)? 
In very broad terms:
They were TF'ed to the level that was appropriate at the time, using Prime95 on CPUs. We now have GPUs and TF software for them, which raises the bit level that makes sense. Yes, it would make some sense not to go as deep in TF as for exponents that have not yet been tested. Most of the DC work is LL, but there is some PRP DC too. Some of the people doing TF in the DC range are doing it to help close the gap between the DC and FTC ranges, so they are willing to go higher than might otherwise make sense. 
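To see why faster hardware raises the sensible TF depth, note that each additional bit level roughly doubles the trial-factoring work. A minimal sketch of that relationship (my own illustration, not a GIMPS formula):

```python
import math

def extra_bit_levels(speedup):
    """Each extra TF bit level roughly doubles the work, so hardware
    that is S times faster pays for about log2(S) additional bit
    levels at the same wall-clock cost. A simplification: it ignores
    the changing probability of finding a factor at deeper levels."""
    return math.log2(speedup)

# A GPU ~100x faster than a CPU at TF justifies roughly 6-7 more bits.
print(round(extra_bit_levels(100), 1))
```

This is only the cost side of the trade-off; the full decision also weighs the (decreasing) chance of finding a factor at each new bit level.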
[QUOTE=ZFR;563742]I have a question about assignments on GPU72. When I get new ones, I have the option of getting "Trial Factoring on Double Check Candidates".[/QUOTE]
It's also related to this subproject I started a few years ago: [url]https://www.mersenneforum.org/showpost.php?p=464177&postcount=1[/url]

I came up with the idea when someone whimsically wondered on this forum whether there would ever be fewer than 20,000,000 unfactored candidates here: [url]https://www.mersenne.ca/status/tf/0/0/1/0[/url] You can see we are now at 20,754,134; at the time it was well over 21 million. So I extended the thinking like this:

[QUOTE]Breaking it down I'm thinking if each 100M range has less than 2M unfactored we have the desired end result. Similarly if each 10M range has less than 200K unfactored... or each 1M range has less than 20K unfactored... or each 100K range has less than 2,000 unfactored.[/QUOTE]

After some analysis I determined that getting every 100K range below 2,000 would require extra TF; but that alone would not get us there, so some of us are also doing deeper P1 and ECM factoring. The GPUto72 project is helping out by making available "Trial Factoring on Double Check Candidates": [url]https://www.gpu72.com/reports/workers/dctf/[/url] 
OK, thanks both. So the biggest contributing factor (hehe) to the bit level increase was the CPU vs GPU TFing speed?

[QUOTE=ZFR;563759]OK, thanks both. So the biggest contributing factor (hehe) was the CPU vs GPU TFing speed?[/QUOTE]
Correct. Up to 100 times faster for TF. 
[QUOTE=Uncwilly;563746]
Some of the people doing the TF in the DC range are doing it to help close the gap between DC and the FTC ranges. So, they are willing to higher than might make sense.[/QUOTE] But isn't the optimal bit level calculated for exactly this reason? Wouldn't it be quicker to close the gap by spending those cycles on actual DC rather than TFing past that bit level? [QUOTE=petrw1;563760]Correct. Up to 100 times faster for TF.[/QUOTE] Gotcha. Thanks. 
The main reason for doing TF on DC candidates is to find factors, the same reason we do ECM on very small exponents that were already double-checked decades ago. For most exponents in the DC range, TF'ing a few more bit levels is still the most efficient known factoring method, compared with ECM or P1 with higher bounds.
Think of helping DC/closing the gap as a side benefit. If you only want to maximize primality-test time saved per unit of factoring time spent, do TF (or P1) on first-time-check candidates. 
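The point about first-time versus DC candidates can be made concrete with a back-of-the-envelope break-even check. This is a sketch under stated assumptions, not GIMPS's actual model: it uses the standard ~1/b heuristic for the chance of a factor between 2^b and 2^(b+1), and the illustrative costs are hypothetical.

```python
def tf_worthwhile(bit, tf_cost, test_cost, tests_saved):
    """Heuristic break-even test: TF from 2^bit to 2^(bit+1) pays off
    if the expected primality-test time saved exceeds the TF time.
    p_factor uses the common ~1/bit estimate for the chance of a
    factor in that range (an approximation, not an exact figure)."""
    p_factor = 1.0 / bit
    return p_factor * tests_saved * test_cost > tf_cost

# Hypothetical costs in arbitrary GPU-hours:
# a first-time candidate saves two tests (LL + DC) if a factor is found,
# while a DC candidate saves only the one remaining test.
first_time = tf_worthwhile(74, tf_cost=1.0, test_cost=50.0, tests_saved=2)
dc_only    = tf_worthwhile(74, tf_cost=1.0, test_cost=50.0, tests_saved=1)
print(first_time, dc_only)
```

With these made-up numbers the same TF effort clears the bar for a first-time candidate but not for a DC candidate, which is why TF past the optimal level on DC exponents is done for the factors (and gap-closing) rather than for pure efficiency.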
Until recently I was doing a lot of trial factoring on candidates that hadn't yet been double checked, since I like to direct my resources to double checking. In a sense I would be eliminating exponents before running LL on them.
But with the advent of PRP verification, it no longer makes sense to run a second LL on an exponent with only a single LL result: the PRP check catches errors that LL won't, avoiding the need for a third test, and the overhead of verifying a PRP run is less than the cost of rerunning LL after a mismatch.

So I primarily do LL double checks where there are mismatches. The value there is finding out which hardware was bad, so all of that hardware's results can be checked early, and I'll likely match one of the two existing results. Otherwise a fresh PRP run is more efficient.

Because I'm primarily targeting mismatches, it doesn't make sense for me to bulk-TF DC exponents to higher levels. And I was about the only one doing that, outside of Wayne's < 20M project. All of the higher ranges already have fewer unfactored exponents than that project's goal. 