2009-12-30, 19:47  #1 
Just call me Henry
"David"
Sep 2007
Liverpool (GMT/BST)
6,011 Posts 
P-1 B2 time estimates
P-1 B1 seems to be fairly linear in the time it takes to run: B1 = 1e9 takes ~10x as long as 1e8. However, P-1 B2 isn't at all linear in running time. Is there any relatively accurate way of estimating how long a given B2 will take based on timings at lower bounds? I want to work out the estimated work done for different B1 and B2 combinations to find which combination is the most efficient. 
2009-12-30, 23:09  #2 
Einyen
Dec 2003
Denmark
3413_{10} Posts 
I went a bit nuts a while back with Excel regression analysis. See posts #4 and #8 here:
http://www.mersenneforum.org/showthread.php?t=9658 I also found that step 1 time is a linear function of B1, but step 2 time is proportional to about B2^{0.6}. 
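The kind of fit Einyen describes can be sketched as a log-log regression: fit t = c * B2^p by least squares on log(t) against log(B2). The timing numbers below are made up purely for illustration (chosen to give an exponent near 0.6); substitute your own benchmarks.

```python
import math

# Hypothetical (B2, step-2 seconds) measurements, NOT real benchmarks.
b2    = [1e8, 1e9, 1e10, 1e11]
times = [0.9, 3.6, 14.0, 57.0]

# Least-squares slope of log(t) vs log(B2) gives the exponent p,
# and the intercept gives log(c).
xs = [math.log(b) for b in b2]
ys = [math.log(t) for t in times]
n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
p = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
    / sum((x - xbar) ** 2 for x in xs)
c = math.exp(ybar - p * xbar)

print("fitted exponent p = %.2f" % p)   # near 0.6 for this made-up data
```

A fit like this only extrapolates safely while the stage 2 parameters (memory, number of blocks) stay the same.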
2009-12-30, 23:30  #3 
Jun 2003
3^{4}×67 Posts 
You could just ask akruppa. He wrote the implementation.

2009-12-31, 00:10  #4 
"Robert Gerbicz"
Oct 2005
Hungary
1,597 Posts 
It's O(sqrt(B2)); there can be other log factors in the formula.
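A minimal sketch of this rule of thumb: if stage 2 is roughly O(sqrt(B2)), one measured timing extrapolates to another B2 by the square root of the ratio (ignoring the possible log factors). The function name and example numbers are mine, for illustration only.

```python
import math

def estimate_stage2(t_known, b2_known, b2_target):
    """Extrapolate a stage-2 timing assuming time ~ sqrt(B2)."""
    return t_known * math.sqrt(b2_target / b2_known)

# Hypothetical example: 2.6 s at B2 = 1e10 predicts ~26 s at B2 = 1e12,
# since the B2 ratio is 100 and sqrt(100) = 10.
print(estimate_stage2(2.6, 1e10, 1e12))
```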

2009-12-31, 11:07  #5  
Just call me Henry
"David"
Sep 2007
Liverpool (GMT/BST)
177B_{16} Posts 
Quote:
I will keep the results there in mind, though, and make suitable alterations if necessary. That's why I posted in his little corner of mersenneforum. Quote:
A B2 from 1e8 to 2e8 took 0.26 seconds. A B2 from 1e10 to 2e10 took 2.6 seconds. A B2 from 1e12 to 2e12 took 32 seconds. A B2 from 1e14 to 2e14 took 365 seconds. In theory each step here should be 10x longer than the previous one. It starts off well but loses it after a while; maybe the log factors you mentioned explain that. The PC wasn't completely idle while doing these benchmarks, although that shouldn't have influenced them much. I will redo them on a completely idle PC at some point soon. 
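The claim can be checked directly from the four measurements in the post: each B2 range is 100x the previous one, so a pure sqrt(B2) model predicts each run to take 10x longer than the last.

```python
# Timings quoted in the post above, in seconds.
times = [0.26, 2.6, 32.0, 365.0]

# Ratio of each run to the previous one; sqrt(B2) scaling predicts 10.
ratios = [b / a for a, b in zip(times, times[1:])]
print(ratios)   # drift above 10 is consistent with extra log factors
```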

2009-12-31, 11:19  #6  
Oct 2004
Austria
2482_{10} Posts 
Quote:


2009-12-31, 11:53  #7 
Just call me Henry
"David"
Sep 2007
Liverpool (GMT/BST)
6,011 Posts 
I don't think there was any RAM scaling done in any of those tests.

2009-12-31, 14:22  #8 
"Nancy"
Aug 2002
Alexandria
2,467 Posts 
Not an easy answer (yet again), I'm afraid. Building a polynomial from its roots, computing its inverse (I think), and the polyeval algorithm all take asymptotically O(n log(n)^2 loglog(n)), where n is the polynomial degree (usually in the millions). Computing the roots takes O(dS^2), where S is the degree used for the Brent-Suyama extension (e.g., 30). Reducing a polynomial modulo another is basically just a multiplication, so O(n log(n) loglog(n)).
With k blocks, k+1 polynomials are built from their roots (F, and the k pieces of G), there are k-1 modular reductions, the inverse is computed if k>1, and the polyeval runs at the end. So yes, the time does depend on memory, or more specifically on the polynomial degree dF and the number of blocks k that you use.
If you want to try curve fits, do separate fits with constant k. Alex 
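Alex's breakdown can be turned into a toy relative cost model. This is my own simplification, not GMP-ECM's actual accounting: every O(n log(n)^2 loglog(n)) step is given a unit constant, every reduction costs one multiplication, and all lower-order terms are ignored, so it only shows relative growth in n and k, not real timings.

```python
import math

def stage2_cost(n, k):
    """Toy relative stage-2 work for polynomial degree n, k blocks."""
    ll = math.log(math.log(n))
    build = n * math.log(n) ** 2 * ll   # from-roots build / inverse / polyeval
    mul = n * math.log(n) * ll          # one modular reduction (a multiplication)

    cost = (k + 1) * build              # build F and the k pieces of G
    cost += (k - 1) * mul               # modular reductions
    if k > 1:
        cost += build                   # inverse computation
    cost += build                       # final polyeval
    return cost

# More blocks at the same degree means more work in this crude model.
print(stage2_cost(10**6, 2) > stage2_cost(10**6, 1))
```

This also illustrates why Alex suggests fitting with constant k: the k-dependent terms would otherwise contaminate the fit against B2.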
2009-12-31, 17:51  #9  
Just call me Henry
"David"
Sep 2007
Liverpool (GMT/BST)
6,011 Posts 
Quote:
I will fix it in my next trials. 
