
2009-06-09, 09:25   #199
cipher

Feb 2007

211 Posts

Quote:
 Originally Posted by Lennart I have tested a small range at 20100T: 224 sec/factor. LLR on the same computer: 300 sec. 231 sec × 1.3 = 300 sec. If that's correct we can sieve some more. This is on a Q6420 @ 2.2 GHz, sieved with tpsieve. Lennart
Lennart, the problem with sieving/testing a small range is that sieved candidates are clustered together, so if you hit a cluster you will get a smaller time per candidate. For example:
Code:
03:42:04 11481491 k's remaining. p=16023844223723527 divides k=87072309039
03:45:46 11481490 k's remaining. p=16023862458768221 divides k=64669832985
03:46:02 11481489 k's remaining. p=16023863669411093 divides k=93525250179
03:55:14 11481488 k's remaining. p=16023907257388957 divides k=81711418425
04:14:22 11481487 k's remaining. p=16024009456250219 divides k=64782567981
04:17:10 11481486 k's remaining. p=16024025948259773 divides k=80677472505
Note how it finds two candidates 16 seconds apart, then the next one approx 9 minutes later, the one after that 19 minutes later, and then one 3 minutes apart. Hence you have to do at least 10T, or take an average time over a whole range.

Example: for my range of 8.644T it found 329 sieve candidates, and my one core does approx 105 Mp/sec, hence it will take 8644/0.105 ≈ 82324 seconds.
82324 seconds / 329 candidates ≈ 250 seconds, or 4:10 per candidate.
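cipher's averaging argument can be sketched in a few lines of Python. This is a minimal illustration, not anything from the project's tools; the helper name is made up, and the numbers are the ones from the post above.

```python
# Average sieve time per surviving candidate over a whole range,
# rather than timing a few (possibly clustered) factors.

def sec_per_candidate(range_tera, rate_mp_per_sec, candidates):
    """Average sieve seconds spent per factor found in the range."""
    range_mp = range_tera * 1_000_000        # 1 T = 1,000,000 Mp
    total_seconds = range_mp / rate_mp_per_sec
    return total_seconds / candidates

avg = sec_per_candidate(8.644, 105, 329)
print(f"{avg:.0f} s/candidate ({int(avg // 60)}:{int(avg % 60):02d})")
# prints: 250 s/candidate (4:10)
```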

2009-06-09, 09:46   #200
Lennart

"Lennart"
Jun 2007

10001100000₂ Posts

Quote:
 Originally Posted by cipher Lennart, the problem with sieving/testing a small range is that sieved candidates are clustered together, so if you hit a cluster you will get a smaller time per candidate. For example: for my range of 8.644T it found 329 sieve candidates, and my one core does approx 105 Mp/sec, hence it will take 8644/0.105 ≈ 82324 seconds. 82324 seconds / 329 candidates ≈ 250 seconds, or 4:10 per candidate.

I know all that. Now I have done 4 hours of sieving and I get 214 sec/factor (real time is 107 sec/factor, but I sieve on 2 cores).

/Lennart

Last fiddled with by Lennart on 2009-06-09 at 09:49

2009-06-09, 13:41   #201
jmblazek

Nov 2006
Earth

26 Posts

Quote:
 Originally Posted by cipher Lennart, the problem with sieving/testing a small range... ...hence you have to do at least 10T, or take an average time over a whole range. Example: for my range of 8.644T it found 329 sieve candidates, and my one core does approx 105 Mp/sec, hence it will take 8644/0.105 ≈ 82324 seconds. 82324 seconds / 329 candidates ≈ 250 seconds, or 4:10 per candidate.
The minimal TPS range we use for timing is 10T. However, we look at 50T and 100T averages to make a determination.

We're using 64-bit tpsieve as the standard for the timings, since it's a little faster than NewPGen. Of course, different hardware is going to produce different results. It looks like your computer is better off LLRing than sieving.

2009-06-09, 14:29   #202
Puzzle-Peter

Jun 2009

2A5₁₆ Posts

Quote:
 Originally Posted by Lennart If that is the correct time, we will stop at 20P. We have stopped reservations, and the guy doing the highest range will do some timings that we will get today, I think. /Lennart
That would be me ;)

Testing on an X5482 @ 3200 MHz, I got 151 seconds to LLR the highest candidate 99899996781*2^333333-1.
The same machine found one factor every 157 seconds in the range 19990T-20000T.

Regards, Peter

-------------------

MooooMoo: The next post and several ones below it refer to TPS's next project: sieving 1<k<10M, 480000<n<500000.

Last fiddled with by MooooMoo on 2009-08-11 at 02:00

2009-08-09, 11:57   #203
Mini-Geek
Account Deleted

"Tim Sorbera"
Aug 2006
San Antonio, TX USA

17·251 Posts

482k-483k complete.
http://www.sendspace.com/file/wakik2
Quote:
 Originally Posted by MooooMoo 2.) One combined file is preferable. Anyway, could you provide software to combine all 1000 separate files into one? 3.) It should be sorted by values of n; all of the k's for n=486000 will be at the top, followed by all the k's for n=486001, etc. The final result will be something like this:
Code:
12345 486000
34567 486000
56789 486000
11111 486001
77777 486001
99999 486001
54321 486002
98765 486002
Maybe there's an easier way (e.g. with srfile), but what I did was copy them all together at a DOS prompt (e.g. "copy *.txt tps-482-483.txt"), then use a hex editor (XVI32) to remove every instance of the header, then use TextPad (because it handles huge text files well) to put the header back at the beginning.
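A scripted alternative to the copy-then-hex-edit route could look like the sketch below. This is my own illustration, not the procedure from the post; the file-name pattern and function name are hypothetical. It merges the per-range files while keeping only the first header line, so no hex editor is needed.

```python
import glob

def merge_sieve_files(pattern, out_path):
    """Concatenate sieve files matching pattern, keeping one header line."""
    wrote_header = False
    with open(out_path, "w") as out:
        for name in sorted(glob.glob(pattern)):
            with open(name) as f:
                header = f.readline()      # NewPGen-style header line
                if not wrote_header:
                    out.write(header)
                    wrote_header = True
                for line in f:             # "k n" candidate lines
                    out.write(line)

merge_sieve_files("tps-482-483-*.txt", "tps-482-483.txt")
```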

2009-08-09, 13:16   #204
cipher

Feb 2007

11010011₂ Posts

I use something called JS TEXT file merger: http://www.tucows.com/preview/373437 It works great. You do have to remove the headers; Find & Replace also works.
2009-08-09, 17:39   #205
Ken_g6

Jan 2005
Caught in a sieve

5·79 Posts

Quote:
 Originally Posted by cipher MooooMoo, a logistical problem that I see: each 1000-n range is approx 100-120 MB, so for a 20K range we are looking at a file size of 2 GB. The RAM requirement for the initial sieve will be too high. Are you planning to break it down, or will you release the batches when they are manageable, like under 1 GB or something? Thanks.
The initialization requirement can probably be made lower, but even after initialization, the full sieve would take nearly 4 GB! Having just used 2.6 GB of my 4 GB for the first time on one task, I can tell you, that's a lot.

With multiple N's, the time required to sieve increases linearly with the number of N's; it's just a constant factor faster than doing the N's independently. So I suggest breaking the sieve into at least 4 sets of N's, to be done in sequence. In the worst case this is equivalent to doing the whole sieve plus (number of sets) one-N sieves. In the best case, we might get a twin before we get to the later sieve(s).

2009-08-09, 18:30   #206
Mini-Geek
Account Deleted

"Tim Sorbera"
Aug 2006
San Antonio, TX USA

10AB₁₆ Posts

Quote:
 Originally Posted by mdettweiler Ah, I see. I'd never tried srfile with twin-format files, and had assumed that it would work OK with them... Okay, never mind, scratch the whole srfile idea. I guess that won't work after all.
Well, the only real difference is the header, so the only thing really standing in the way is that the header is telling it something it doesn't understand. If the headers were replaced, it would work, but that would probably be enough trouble that it wouldn't be worth using srfile instead of something like what I described. By the way, I should note that I used a hex editor to get rid of all the headers so that I could also get rid of their line breaks: I replaced the hex for the header and its line break with nothing. You could also, I suppose, put the header back at the beginning with a hex editor instead of a separate app, but XVI32 isn't very good at editing text.

Last fiddled with by Mini-Geek on 2009-08-09 at 18:40

2009-08-09, 19:22   #207
MooooMoo
Apprentice Crank

Mar 2006

2·227 Posts

Quote:
 MooooMoo, a logistical problem that I see: each 1000-n range is approx 100-120 MB, so for a 20K range we are looking at a file size of 2 GB. The RAM requirement for the initial sieve will be too high. Are you planning to break it down, or will you release the batches when they are manageable, like under 1 GB or something? Thanks.
Quote:
 Originally Posted by Ken_g6 The initialization requirement can probably be made lower; but even after initialization, the full sieve would take nearly 4GB! Having just used 2.6GB of my 4GB for the first time on one task, I can tell you, that's a lot.
Yes, it'll be broken down into four parts - 480K-485K, 485K-490K, and so on.

2009-08-10, 03:28   #208
geoff

Mar 2003
New Zealand

13×89 Posts

You might be biting off more than you can chew with this range. tpsieve will need a bitmap with (nmax-nmin)*(kmax-kmin)/6 bits to hold the combined sieve file, so I think that is about 4 GB for 1 <= k <= 10M, 480K <= n <= 500K. Also remember tpsieve is still a lot slower than NewPGen in 32-bit mode.

Edit: The sieve efficiency will decrease as the range of n increases. I know there is a trade-off because the PRP time also increases as k increases, but you should test carefully whether the trade-off is worth it. One way to test is to run without a sieve file (specify the full range with -k -K -n -N and no output file) and sieve for very large factors so that not many are reported. When I added the range-of-n feature I really had in mind a very small range, like fewer than 10 n's; I hadn't thought about what would happen with a very large range.

Last fiddled with by geoff on 2009-08-10 at 03:57
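geoff's memory estimate is easy to sanity-check. The sketch below just plugs the ranges from the post into his stated (nmax-nmin)*(kmax-kmin)/6-bit formula; the function name is made up for illustration.

```python
# Back-of-the-envelope check of the tpsieve bitmap size,
# using the (nmax - nmin) * (kmax - kmin) / 6 bits formula from the post.

def bitmap_bytes(kmin, kmax, nmin, nmax):
    bits = (nmax - nmin) * (kmax - kmin) / 6
    return bits / 8

gb = bitmap_bytes(1, 10_000_000, 480_000, 500_000) / 1e9
print(f"about {gb:.1f} GB")
# prints: about 4.2 GB -- consistent with geoff's "about 4 GB"
```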
2009-08-10, 04:24   #209
Ken_g6

Jan 2005
Caught in a sieve

5·79 Posts

Quote:
 Originally Posted by MooooMoo Yes, it'll be broken down into four parts - 480K-485K, 485K-490K, and so on.
So that's < 1GB of memory used.

But Geoff's right about the overall speed. I'll hazard an educated guess that each extra K takes 1/10 the time of the first K. That still means that testing each P will take about 500 times as long as normal for the same size range of candidates. How much longer does LLR take with those bigger K's? Twice as long? In that case, we're better off with a range of 10 or maybe 20 K's.

Although if somebody on this math forum can tell me how to quickly solve for all m where:

0 <= k*2^m (mod p) <= r, where r < p, and m is in a given range from 0 to some n
(hint: here, r=10000000 and n=5000)

I might be willing to give that coding challenge a shot.
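For reference, a naive O(n) baseline for the problem is straightforward; the fast solution being asked for would have to beat this. The sketch below only restates the condition from the post, and the sample values of k and p are made up.

```python
# Find all m in [0, n] with k * 2^m mod p <= r, by stepping the residue.

def small_residue_exponents(k, p, r, n):
    hits = []
    x = k % p
    for m in range(n + 1):
        if x <= r:
            hits.append(m)
        x = (x * 2) % p          # advance k*2^m -> k*2^(m+1)
    return hits

# Hypothetical sample values; r and n match the hint in the post.
print(small_residue_exponents(k=123456789, p=10**9 + 7, r=10_000_000, n=5000))
```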

