mersenneforum.org Reserved for MF - Sequence 3366

2020-05-29, 22:28   #397
EdH

"Ed Hall"
Dec 2009

3,527 Posts

Quote:
 Originally Posted by charybdis The first WU didn't restart for me. The original range was 22690000-22695000; I noticed almost immediately that I'd got the wrong number of threads and killed the client. When I started again with --override t 6 the client was given 22695000-22700000 instead. Maybe changing the number of threads made the server think it wasn't the same client?
It could have something to do with cleanup, I suppose. My successes with that sort of cleanup have been on my local LAN "farm," and I have to use a double CTRL-C to ensure my scripts don't remove the download files.

Thanks for checking.

2020-05-29, 22:30   #398
EdH

"Ed Hall"
Dec 2009
Adirondack Mtns

3,527 Posts

Here are the results for the process/thread tests:
Code:
eFarm.78 overran 23335000-23340000 at 89:54 (5394s)
eFarm.78b overran 23370000-23375000 at 95:38 (5738s)
I have returned eFarm.78 to one process. I have added eFarm.19 and eFarm.20 for testing. Although they are an i7 and an i5, respectively, each with enough RAM, I don't really expect them to succeed. They are set to run only once. I will evaluate their runs later.
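For reference, the (5394s)/(5738s) figures are just the mm:ss stamps converted to seconds, compared against the 3600 s workunit deadline mentioned a couple of posts later. A quick sketch of that arithmetic:

```python
# Convert the logged mm:ss overrun stamps to seconds and compare them
# against the 3600 s workunit deadline referenced later in the thread.
def mmss_to_seconds(stamp: str) -> int:
    minutes, seconds = stamp.split(":")
    return int(minutes) * 60 + int(seconds)

DEADLINE = 3600  # seconds; the WU timeout from post #400

for host, stamp in [("eFarm.78", "89:54"), ("eFarm.78b", "95:38")]:
    elapsed = mmss_to_seconds(stamp)
    status = "overran" if elapsed > DEADLINE else "ok"
    print(f"{host}: {elapsed}s ({status})")
```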
2020-05-29, 22:38   #399
charybdis

Apr 2020

3×5×11 Posts

Quote:
 Originally Posted by charybdis Edit: I'm also noticing that the 6-thread machines aren't quite running at full load; I'm seeing load averages around 5.75. This isn't happening with the 4-thread machines. Maybe there is some inefficiency in running more than 4 threads per client?
I've killed one of these clients and started two clients with -t 3 on the same machine instead. They are both running at a load of almost exactly 3, as expected. I'll watch the timings to see if this is producing a genuine improvement.

2020-05-30, 00:14   #400
EdH

"Ed Hall"
Dec 2009
Adirondack Mtns

3,527 Posts

Both 19 and 27 overran 3600s and have been set aside. 78 overran using 4 threads. I've restarted 78 with all 8 threads and will keep an eye on it. That's probably all the testing I'll try. I remember there was a hard stoppage last time, due to too many timeouts caused by my farm. I hope to avoid that issue this time.
2020-05-30, 00:59   #401
charybdis

Apr 2020

3×5×11 Posts

Quote:
 Originally Posted by charybdis I've killed one of these clients and started two clients with -t 3 on the same machine instead. They are both running at a load of almost exactly 3, as expected. I'll watch the timings to see if this is producing a genuine improvement.
Definitely looking like a slight speedup: WUs are taking ~42 min on average with 3 threads, compared to ~23 min with 6 threads. I'll switch the other 6-thread machines to running two clients too.
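The arithmetic behind the "slight speedup": two 3-thread clients finish two WUs every ~42 minutes, i.e. ~21 min per WU effective, versus ~23 min for the single 6-thread client. A quick sketch:

```python
# Rough throughput comparison: one 6-thread client vs two 3-thread
# clients on the same machine, using the times quoted above.
wu_time_6t = 23.0   # minutes per WU, single 6-thread client
wu_time_3t = 42.0   # minutes per WU, each of two 3-thread clients

# Two 3-thread clients complete two WUs per ~42 minutes.
effective_3t = wu_time_3t / 2
speedup = wu_time_6t / effective_3t

print(f"effective: {effective_3t:.0f} min/WU (two 3t clients) "
      f"vs {wu_time_6t:.0f} min/WU (one 6t client), "
      f"speedup {speedup:.1%}")
```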

2020-06-01, 23:51   #402
VBCurtis

"Curtis"
Feb 2005
Riverside, CA

1000111111100₂ Posts

Update: Q=43.1M, 114.4M relations. Yield average almost exactly 5.0, down a bit from the 20-25MQ range. Forecast is 350M or so relations by Q=100M (80MQ * 4.x yield). That leaves ~650M for ggnfs; if anyone is willing to test-sieve a bit to see what Q-range that corresponds to, we can get it into the 15e queue shortly. I expect to have time to test-sieve Friday or Saturday, if necessary.
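The forecast arithmetic, spelled out. This is a sketch under stated assumptions: the 80MQ span is read as Q=20M..100M, "4.x" is taken as roughly 4.4 relations per q, and the ~1000M overall target is inferred from 350M (CADO) + 650M (ggnfs):

```python
# Back-of-envelope relation forecast for the CADO portion.
# Assumptions (not stated exactly in the post): Q runs 20M..100M,
# average yield ~4.4 relations per special-q, total target ~1000M.
cado_range_mq = 100 - 20            # MQ of special-q sieved by CADO
avg_yield = 4.4                     # the "4.x" relations per q
cado_forecast = cado_range_mq * avg_yield   # millions of relations
ggnfs_share = 1000 - cado_forecast          # what ggnfs must supply

print(f"CADO forecast: ~{cado_forecast:.0f}M relations, "
      f"ggnfs share: ~{ggnfs_share:.0f}M")
```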
2020-06-02, 00:43   #403
swellman

Jun 2012

5572₈ Posts

Quote:
 Originally Posted by VBCurtis Update: Q=43.1M, 114.4M relations. Yield average almost exactly 5.0, down a bit from the 20-25MQ range. Forecast is 350M or so relations by Q=100M (80MQ * 4.x yield). That leaves ~650M for ggnfs; if anyone is willing to test-sieve a bit to see what Q-range that corresponds to, we can get it into the 15e queue shortly. I expect to have time to test-sieve Friday or Saturday, if necessary.
I’ll be happy to test sieve from Q=100M until we get ~650M relations, and then queue it in 15e. Just to confirm: we are using the poly in post 374 of this thread, correct? Should I use the same parameters translated from CADO (in which case, what are they?), or is using the same polynomial all that matters? In other words, am I free to use 2 LPB / 3 LPB, sieve on either side, etc., to maximize yield?

2020-06-02, 05:51   #404
VBCurtis

"Curtis"
Feb 2005
Riverside, CA

2²×1,151 Posts

Correct poly. I used 268/400M for the lim's, and 33LP with 64/95 for the mfb's. Basically, I chose the params as if ggnfs were all we were using for the job; they're also the ones we used for the last job we sieved in this same hybrid way. If you leave 33LP and the lim's the same, anything else should be fine to change as far as generating compatible relations goes. Greg stated flatly for 2,1165+ that even the lim's can be changed without worry (let's not try that here, OK?).
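For anyone translating those numbers into a ggnfs job file, they would map onto the standard job-file keys roughly as below. This is a hypothetical sketch, not the actual file: which of the 268M/400M lim's goes on which side, and which side carries the 3-large-prime mfb of 95 (95 < 3×33), are assumptions here, and the polynomial lines would come from post 374.

```text
# Hypothetical ggnfs .job fragment. Poly lines (n:, c0..c5, Y0:, Y1:,
# skew:) come from post 374 and are omitted here.
rlim: 268000000    # assumption: the smaller lim on the rational side
alim: 400000000
lpbr: 33           # 33-bit large primes on both sides ("33LP")
lpba: 33
mfbr: 64           # assumption: 2 large primes on the rational side
mfba: 95           # assumption: 3 large primes on the algebraic side
```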
2020-06-07, 17:41   #405
VBCurtis

"Curtis"
Feb 2005
Riverside, CA

2²·1,151 Posts

Update: We're at Q=88.2M, so CADO sieving will wrap up tonight or Monday morning. 280M relations so far; it looks like the total from our CADO effort will be 320M or so.
2020-06-08, 21:43   #406
VBCurtis

"Curtis"
Feb 2005
Riverside, CA

2²×1,151 Posts

We have reached the end of the CADO portion of the sieving for this C197. Results files at Q=99.8 and 99.9M are coming in now. If you get a workunit above 100M, please shut down your client. I'll kill the server in half an hour or so, when the incoming WUs exceed 100M. 319M raw relations; I'll post more stats after I kill the server and uniquefy the results.
2020-06-10, 03:10   #407
VBCurtis

"Curtis"
Feb 2005
Riverside, CA

2²·1,151 Posts

Final tally: 319M raw relations, 251M unique. The unique count doesn't mean much until I have the entire dataset.
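For anyone curious what "uniquefy" involves: a relation is identified by its (a,b) pair, and overlapping special-q ranges (or restarted WUs) can find the same relation more than once, so duplicates are removed before the count means anything. CADO-NFS has its own duplicate-removal stage for this; the toy Python sketch below only illustrates the idea, and the "a,b:...:..." line format is an assumption about the relation-file layout:

```python
# Toy duplicate removal: a relation is uniquely identified by its
# (a,b) pair, regardless of which special-q found it.
def uniquefy(relations):
    seen = set()
    unique = []
    for line in relations:
        ab = line.split(":", 1)[0]   # the "a,b" prefix identifies it
        if ab not in seen:
            seen.add(ab)
            unique.append(line)
    return unique

raw = [
    "-12345,67:1a2b:3c4d",
    "890,123:5e:6f",
    "-12345,67:1a2b:3c4d",   # same relation found from a different q
]
print(len(raw), "raw,", len(uniquefy(raw)), "unique")  # 3 raw, 2 unique
```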
