
2010-06-15, 04:44   #45
Oddball

May 2010

499 Posts

Quote:
 Originally Posted by Oddball With the new sieve, it's expected to take 4-5 hours: 1 hour and 10 minutes using the new sieve to p=100M, then 3-4 hours to use NewPGen to sieve from p=100M to p=100G.
Quote:
 Can you double both SieveSize and SmallPrimes and rerun it?
The sweet spot for my PC seems to be increasing SieveSize and SmallPrimes by 50% (SieveSize = 1500000 and SmallPrimes = 9000000).

It then takes 1 hour and 3 minutes to sieve a 1T range to p=160M.

2010-06-15, 12:09   #46
axn

Jun 2003

2×3⁴×29 Posts

Quote:
 Originally Posted by Oddball 3-4 hours to use NewPGen to sieve from p=100M to p=100G.
Sounds wrong. Do you mean 30-40 hrs?

2010-06-15, 21:04   #47
Oddball

May 2010

499 Posts

Quote:
 Originally Posted by axn Sounds wrong. Do you mean 30-40 hrs?
Actually, I meant 15-16 hours, which is 12 hours longer than my original estimate. The run was estimated as 7PM - 10:30AM, but I accidentally calculated that as 7PM - 10:30PM.

Last fiddled with by Oddball on 2010-06-15 at 21:04

2010-06-15, 22:47   #48
Flatlander
I quite division it

"Chris"
Feb 2005
England

100000011101₂ Posts
Clueless.

Quote:
 Originally Posted by axn I am currently trying to compile a Win64 build. But running into some weird runtime error. Need to troubleshoot
I haven't used FreePascal before, but I fed your text file into 64-bit Lazarus and an .exe fell out!

Anyway, the exe runs okay but I got the following compiler messages:
Code:
lm(44,19) Hint: Converting the operands to "DWord" before doing the add could prevent overflow errors.
lm(82,23) Hint: Converting the operands to "Int64" before doing the multiply could prevent overflow errors.
lm(102,23) Hint: Converting the operands to "Int64" before doing the multiply could prevent overflow errors.
lm(124,23) Hint: Converting the operands to "Int64" before doing the multiply could prevent overflow errors.
lm(205,17) Hint: Converting the operands to "DWord" before doing the add could prevent overflow errors.
lm(212,48) Hint: Converting the operands to "DWord" before doing the subtract could prevent overflow errors.
lm(218,51) Hint: Converting the operands to "DWord" before doing the subtract could prevent overflow errors.
lm(223,18) Hint: Converting the operands to "DWord" before doing the add could prevent overflow errors.
lm(225,51) Hint: Converting the operands to "DWord" before doing the subtract could prevent overflow errors.
lm(312,35) Hint: Converting the operands to "DWord" before doing the add could prevent overflow errors.
Project "lm" successfully built. :)
Is the exe 'safe' to use?

(Windows 7 64bit.)

2010-06-16, 00:22   #49
axn

Jun 2003

1001001011010₂ Posts

Quote:
 Originally Posted by Flatlander Is the exe 'safe' to use?
Should be. I have used "proper" data types, so it should be ok. You can spot-check a 100G k range against NewPGen, sieved to the same depth.

2010-06-16, 00:25   #50
axn

Jun 2003

2·3⁴·29 Posts
Not thinking big enough.

SieveSize and SmallPrimes can be bumped up quite a bit. I have tried SmallPrimes of 60e6 (p~1.2e9) and SieveSize of 6e6, and it runs in under 2hrs. That should shave a lot more hours off NewPGen sieving.

Last fiddled with by axn on 2010-06-16 at 00:34 Reason: Sieve size is 6e6 not 6e9 !
2010-06-16, 12:49   #51
Flatlander
I quite division it

"Chris"
Feb 2005
England

31·67 Posts
axnSieve rocks!

Using SmallPrimes of 60e6 ("p<=1,190,494,759") and SieveSize of 6e6 as above: 2hr 55m for a 1T range on one core of a DualCore T4400 laptop (2.2GHz, 1MB L2 cache, Windows 7 64-bit). Very nice. Uses 1,268,972K of RAM in Task Manager.

As you suggest, I'll compare a sample with NPG's output.
2010-06-16, 21:30   #52
Flatlander
I quite division it

"Chris"
Feb 2005
England

31·67 Posts
axnSieve

SmallPrimes of 60e6 and SieveSize of 60e5 produce identical results to NPG over a 0.05T sample. (The only difference was the header, where NPG stopped at a P ten less than axnSieve's.)

Testing is underway for 90e6/90e5, P=1,824,261,409. It looks like 2T will take about 6hr 15m. (Uses 1,833,380KB.) I hit a compiler error at 93e6/93e5, but 92e6/92e5 compiled fine. (I won't go that high though.)

With 60e6/60e5 a 1T sieve uses 192MB in NPG, so 2T will fit comfortably, but even with 90e6/90e5 I don't think 3T will be <485MB. (Hmmm. Might be worth tweaking the program to do 2.5T.)
2010-06-16, 21:39   #53
axn

Jun 2003

2·3⁴·29 Posts

Quote:
 Originally Posted by Flatlander With 60e6/60e5 a 1T sieve uses 192Mb in NPG so 2T will fit comfortably, but even with 90e6/90e5 I don't think 3T will be <485Mb. (Hmmm. Might be worth tweaking the program to do 2.5T.)
192 MB is what NewPGen automatically chooses, but it can work with much less while still using fast array mode (96 MB, maybe even 48 MB). I am going to go out on a limb and say that a 384 MB fast array can handle ranges much larger than 20T (yes, 20, not 2).

PS:- I remember there being a rule of thumb of about 6 bytes per k. That'd mean 384 MB can handle 64M (=67108864) candidates in fast array mode.

2010-06-16, 22:04   #54
axn

Jun 2003

2×3⁴×29 Posts

Quote:
 Originally Posted by Flatlander Testing underway for 90e6/90e5, P=1,824,261,409. Looks like 2T will take about 6hr 15m. (Uses 1,833,380KB.) I reached a compiler error at 93e6/93e5 but 92e6/92e5 compiled fine. (I won't go that high though.)
If there are real savings to be had in allowing the program to sieve higher than 1G, I have a few ideas that could reduce the memory requirement for the bigger primes, possibly allowing you to sieve as high as p=3G.

However, I am not sure it is worth it. Basically, going from 60e6 (~1.2G) to 90e6 (~1.8G) costs you (6h15)/2 - 2h55 ≈ 12 min per 1T (assuming both numbers are from the same machine). Implementing the memory-saving measures will probably introduce a slowdown of 10-15% (pure speculation). Let's say that instead of 3h07 for a 1T range, it takes 3h30. The effective delta would then be 35 min instead of 12 min. Can NewPGen cover the same range (i.e. 1.2-1.8G) in 30 min?

I realise that, since 1.8G is already done, the correct analysis should be from 1.8G to the optimal sieve point. Fine. Can you post some timings for NewPGen taking a 1T (or 2T or whatever) range from p=1.8G to p=3G in increments of 0.2G? That'll give me a clue as to what a good cutoff point is.

PS:- There is another idea that could give a 5x speed improvement, but it involves sieving candidates out of order (technically, by residue classes mod 7*11*13), so the candidates would have to be sorted after the sieving step.

2010-06-16, 22:34   #55
amphoria

"Dave"
Sep 2005
UK

2·19·73 Posts

Quote:
 Originally Posted by axn PS:- I remember there being a rule saying something like 6 bytes per k. That'd mean 384 MB can handle 64M (=67108864) candidates in fast array mode.
Unfortunately, that is not what I experienced: with 32.5M candidates it used normal array mode.

