20050927, 20:01  #23  
Sep 2002
Database er0rr
6410_{8} Posts 
Quote:
The shorter the range of "n" you are testing, the more FFT jumps are going to bias the calculation. 

20050928, 00:29  #24 
A10_{16} Posts 
Curtis,
I'd like to stress that fact. RMA.NET is set to do fixed-n sieving, which is already faster than fixed-k. Fixed-k mode for RMA.NET does need an optimization for large n, but the sweet spot must be clearer than that for a revision. Someone has to do the math here.... I will credit any contributions with a mathematical basis. TTn Last fiddled with by TTn on 20050928 at 00:29 
20050928, 01:02  #25 
5×13^{2} Posts 
Paul,
I remember simply averaging the range numbers to get approximately the same formula. We have discussed this before, but could you please elaborate a little more on your formula? What if a machine sieves ultra slowly but can test fast? I will do some timings and include this in RMA.NET until the complete definition of the sweet spot is clear. 
20050928, 01:31  #26 
Sep 2002
Database er0rr
2^{3}·3·139 Posts 
An LLR test on a 100k number will take 4 times as long as on a 50k number. In general, if a number is twice as long as another, it will take 4 times as long with LLR. The approximate LLR test time is given by c*n^2, where "n" is the number of bits and "c" is a constant.
Imagine the curve n^2 ranging from a minimum "n" to a maximum "n". The x-axis measures "n" and the y-axis measures "n^2". The area under the curve is the total amount of work. Clearly most of the work is for large "n" values - to the right of the graph. The idea is to get the area to the left of the average test equal to the area on the right. Hence the formula I gave. (See the old thread.) This really needs a piccy.

As I said before, this is an oversimplification because it does not take into account "FFT jumps". Say you calculated by using my formula on 300k-400k but there was an FFT change at 360k - more sieving would be needed. It is very complicated to hit "the sweet spot", and it is further complicated when you take into account differing hardware resources and participation levels.

If you are using a single machine, then my formula will work to calculate the average. Then get the LLR timing, and then sieve to that time limit plus some. This will be the most efficient. 
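The exact formula from the old thread isn't quoted here, but under the c*n^2 model it can be reconstructed from the equal-area condition Paul describes: the integral of n^2 from n_min to the average m must equal the integral from m to n_max, which gives m = cbrt((n_min^3 + n_max^3) / 2). A minimal sketch:

```python
# Sketch of the "equal areas" average under the c*n^2 LLR time model.
# Reconstructed from the description above, not quoted from the old thread.

def equal_area_n(n_min, n_max):
    """Return m such that the area under n^2 on [n_min, m] equals
    the area on [m, n_max]:
        m^3 - n_min^3 = n_max^3 - m^3
    =>  m = ((n_min^3 + n_max^3) / 2) ** (1/3)
    """
    return ((n_min ** 3 + n_max ** 3) / 2) ** (1 / 3)

m = equal_area_n(300_000, 400_000)
print(round(m))  # about 357,000 - above the midpoint 350,000
```

Note the average lands above the plain midpoint of the range, because the big candidates to the right dominate the work.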
20050928, 01:50  #27 
Sep 2002
Database er0rr
3336_{10} Posts 
Here's a diagram

20050928, 01:55  #28 
2·3·1,471 Posts 
Ok, that sounds clearer to me.
I will include this and then fine-tune it. RMA.NET will have to look at the complete working environment and make approximate adjustments: SSE2, CPU speed, memory, etc. I have most of the code kicking around, but some evidence will be needed for the guesstimation. 
20050928, 11:18  #29  
May 2003
3×7×11 Posts 
Quote:
Fortunately, getting the balance truly correct 1) is a fair bit of hassle, involving comparison of several quantities that can only be approximated, and 2) doesn't really save you huge amounts anyway. You're trying to find the global minimum of a semi-hyperbola, and as long as you're sieving sensibly, you're sitting in a pretty flat section anyway. A little bit to the left or a little bit to the right (sieving depth) and you'll barely notice any change in height (total work factor). Phil 
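Phil's flat-minimum point can be checked with a toy model (the cost functions and the constant below are hypothetical, not from the thread): take sieving to depth P as costing ~P/ln(P) units (one per prime tried), and the candidates left to test as ~C/ln(P) (Mertens-style survival, with the per-test cost folded into C).

```python
import math

# Toy model of total work vs. sieve depth P. Assumptions (mine, not
# the thread's): sieve cost ~ P/ln(P), surviving test cost ~ C/ln(P).
C = 1e9  # hypothetical (test cost x candidate count) constant

def total_work(p):
    return p / math.log(p) + C / math.log(p)

# Find the minimum on a coarse log-spaced grid from 1e6 to 1e10.
depths = [10 ** (e / 100) for e in range(600, 1000)]
p_opt = min(depths, key=total_work)

for p in (p_opt / 2, p_opt, 2 * p_opt):
    print(f"depth {p:.2e}: total work {total_work(p):.3e}")
# Halving or doubling the sieve depth moves total work by only ~1-2%:
# the minimum sits in a flat valley, as Phil says.
```

The exact numbers depend on the made-up constant, but the flatness of the valley around the optimum does not.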

20050928, 12:12  #30 
1100010110_{2} Posts 
Phil,
That's what I had originally thought, and found in practice. You are the most credible here, so I will work on this for a while. 
20050928, 12:43  #31  
May 2003
3×7×11 Posts 
Quote:
Here's the simplest system (but as I said, it's certainly not hassle-free) that's going to be pretty close to optimal.

1) Try to be sieving a range that's much larger than a single FFT-size band. For fixed-k hunts, that's good advice generally, as the sieve time is proportional to the square root of the range size, so wider sieving is more efficient than separate narrow sieves. (Common knowledge.)

2) Find out where the next FFT speed drop is, and find the average time for testing candidates _before_ the speed drop. (= (slowest + fastest)/2, see below.)

3) Sieve until the average sieving removal rate (use a longer moving average than the one in NewPGen; Paul uses a very narrow FIR, but a slow IIR works better in my experience) is equal to the rate measured in (2).

4) Hack off everything that's below the FFT cutoff, and PRP it.

5) (In parallel with 4.) Considering only everything above the FFT cutoff, go to 2.

Eventually, you'll be left with only a single FFT-size band. But don't worry - that requires no special treatment. Within each band, the speed should be linear, so there's no need to take into account the perceived quadratic growth, which is an artefact of the FFT growth. Note that the above protocol will cause the sieving cutoffs to be dependent upon the size of the range you initially decide you're interested in. Phil 
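The split test in steps 2-4 can be sketched as follows. This is a toy with made-up band boundaries and costs, not NewPGen's actual interface; it only shows the comparison being made: a band comes off the sieve once its average test time drops below the cost of removing one more candidate by sieving.

```python
# Toy sketch of the band-splitting decision. Bands and costs are
# hypothetical; within a band, test time is taken as linear in n.

def band_avg_test_time(n_lo, n_hi, cost_per_n):
    # Average candidate in a linear band costs (slowest + fastest) / 2.
    return cost_per_n * (n_lo + n_hi) / 2

def bands_to_split_off(bands, removal_time):
    """bands: list of (n_lo, n_hi, cost_per_n), lowest band first.
    removal_time: current time to remove one candidate by sieving.
    Return how many of the lowest bands are now cheaper to PRP than
    to keep sieving - these get hacked off (step 4)."""
    done = 0
    for n_lo, n_hi, cost in bands:
        if band_avg_test_time(n_lo, n_hi, cost) <= removal_time:
            done += 1
        else:
            break
    return done

bands = [(300_000, 360_000, 1.0), (360_000, 400_000, 1.4)]
# Early on, sieve removals are cheap: keep both bands in the sieve.
print(bands_to_split_off(bands, removal_time=100_000.0))  # 0
# Later, removals cost more than testing the low band: split it off.
print(bands_to_split_off(bands, removal_time=400_000.0))  # 1
```

In practice the loop repeats with the remaining upper bands, exactly as step 5 describes.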

20050928, 13:13  #32 
Sep 2002
Database er0rr
2^{3}·3·139 Posts 
Phil's approach is better... I did say the model I suggested was an oversimplification but it works well enough on a very large "n" range, spanning many FFT changes...
When investigating FFT changes, please note that the FFT changes are different for non-SSE2 and SSE2-enabled computers. Last fiddled with by paulunderwood on 20050928 at 13:21 
20050930, 18:43  #33 
"Curtis"
Feb 2005
Riverside, CA
4,243 Posts 
Phil's explanation makes a lot of sense to me, but I think the effect I tried to explain without rigorous math is not taken into account. Let me try to use more math:
When deciding at what sieve depth to split off an FFT band, we should compare not the average sieve time, but the marginal cost of leaving that band in the sieve. Let's say the band in question comprises 10% of the width of the sieve. Since sieve speed is inversely proportional to the square root of sieve width, a marginal change of 10% of width results in a speed change of sqrt(1.10) per p-value tested. This is approximately 5% faster per p if you remove a band that is 10% of the original sieve.

However, if you leave the band in, you find 10% more factors per p-value checked, with only a 5% reduction in speed per p-value; this is because the sieve has a constant probability of finding a factor per (p, k) pair checked, so leaving in the 10% of k-values means 10% more factors found per p-value.

I believe this hidden increase in efficiency on the margin is not lost until the average sieve time is double the time to LLR an average member of the band to be split off. I'll try to make a more mathematical argument for this last part next time; I don't have time to flesh it out presently. If you see any holes in the initial part of my explanation, please comment. I've not read this anywhere, and tried empirically to find this relationship with a few powers I had... playing with marginals led me to this half-guess. Curtis 