#386
"Curtis"
Feb 2005
Riverside, CA
2²×13×89 Posts
I'm having some trouble controlling CADO, and haven't yet managed to test-sieve the two polys. I'm using tasks.threads = 20, nrclients = 5, and 4 threads per las client, but one invocation is spinning up 10 or more clients anyway.
Going to try restarting the server. Ed, I see you just aimed one client at the job; it may not get work for a while, until I get things straightened out and the load back below 4 * corecount on the server.
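For reference, a minimal sketch of the server-side knobs in play here, assuming standard CADO-NFS parameter names (exact spellings vary a little between versions, so check your params file before copying):

Code:
    tasks.threads = 20        # threads the server assigns per task
    slaves.nrclients = 5      # clients spawned per entry in slaves.hostnames
    slaves.hostnames = localhost
    tasks.wutimeout = 3600    # seconds before an unreturned WU is reissued

Note that nrclients applies per hostname entry, so a hostnames list with multiple or repeated entries is one way a single invocation could end up with far more clients than intended; that's a guess at the behavior above, not a confirmed diagnosis.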
#387
"Ed Hall"
Dec 2009
Adirondack Mtns
DDA₁₆ Posts
I didn't see this until now. I've added three more clients and all appear to be running, but they are all near the time limit if you're still at 3600. I don't think I have any other machines that could handle the memory requirement; I seem to be using about 7G of RAM, and that's too close for comfort on my 8G machines.
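A quick way to see what the sievers are actually using, with standard Linux tools (assuming the CADO siever binary is named las, as usual):

Code:
    # resident memory of each las process, largest first (RSS is in KiB)
    ps -C las -o pid,rss,etime,cmd --sort=-rss | head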
#388
"Curtis"
Feb 2005
Riverside, CA
2²·13·89 Posts
Strange: my clients are using 5.1G. I still don't have the right number of clients going, but it's 4 this time instead of 10, so I'll cope.

The overload (20 clients on I=16) overnight meant I didn't get to compare polys, and now Ed is helping one client, so my plan to compare ETAs is out the window. Poly #2 scored 5% worse than #1, so it's a safe bet it wasn't going to be faster anyway. So, we're in production: A=30, 268M/400M lims, 33-bit large primes with 64/95 cofactor (mfb) bounds. Basically, I copied the Kosta C198 setup file but started Q at 20M, since Charybdis' research on GNFS-178-ish jobs has shown that really low Q produces so many duplicates that it's not worth searching. Workunits are Q=10k (EDIT: 5k, misremembered) each and take something like 5 thread-hours (Ivy Bridge, 2.6 GHz). I don't think they time out after 3600 sec, but I'll keep an eye out to see whether that policy applies to remote clients but not to localhost clients.

Last fiddled with by VBCurtis on 2020-05-28 at 23:58. Reason: 10kQ -> 5kQ corrected
#389
"Ed Hall"
Dec 2009
Adirondack Mtns
2·3²·197 Posts
Three clients are running in the 4x-minute range, but one is running 52/56/53 minutes, etc. I might try adding more, but I'll be cautious about any additions.
#390
"Curtis"
Feb 2005
Riverside, CA
2²×13×89 Posts
After 20 hours we have done almost 2.5MQ and gathered 13.9M raw relations, so yield is just under 6 relations per unit of Q (13.9M / 2.5M ≈ 5.6).

I have seen no errors on the server and no workunits timing out. There were a couple of reissued WUs because I mistyped the bindir= flag on my clients, but no "real" errors. I didn't test-sieve, so I don't know how yield will hold up, but we may get 300M+ relations by the time we reach Q=100M! That would leave a no-big-deal job for the 15e queue, something like Q=100-450M. We don't have to get to Q=100M specifically; it's reasonable to start the ggnfs portion of the job at 80M if interest in this job wanes.
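Rough arithmetic behind that 300M+ figure, assuming yield decays from today's ~5.6 toward the low 3s as Q climbs: the remaining ~77M of special-q at an average yield near 4 would give about 77M × 4 ≈ 310M further relations. The decay rate is a guess, not a measurement.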
#391
"Ed Hall"
Dec 2009
Adirondack Mtns
2·3²·197 Posts
eFarm.78 is bordering on the time limit; the last run took 59:45. I'm still confused about the core/thread/memory interplay. This machine, the slowest, is an i7 (4c/8t) with 50% more memory than any of the others, and that memory is on three channels. The two i5s, with only a third the memory and no hyperthreading, are way ahead of this i7 time-wise. I might see what happens if I knock 78 down to 4 threads, though in theory I'd expect a time overrun. I'm also still wondering whether any of this might be disk access timing, since the two i5s have SSDs.
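One way to separate disk effects from CPU effects with standard Linux tools (nothing CADO-specific in this sketch; iostat comes from the sysstat package):

Code:
    vmstat 5       # a consistently high 'wa' (I/O wait) column points at disk
    iostat -x 5    # %util near 100 on the data drive means it's I/O-bound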
#392
charybdis
Apr 2020
13² Posts
Curtis, I've thrown a few more clients your way - should speed things up a little. Forgot to set the number of threads on one of them at first, so you'll get an expired WU at some point.
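For anyone else joining in, the client invocation looks something like this. The --server and --bindir flags appear elsewhere in the thread; the --override threads syntax is an assumption from common CADO usage (check cado-nfs-client.py --help), and the URL is a placeholder:

Code:
    ./cado-nfs-client.py --server=https://server.example:8001 \
        --bindir=/path/to/cado-nfs/build \
        --override -t 4    # threads per las client; leave this off and the
                           # WU runs at the default and may expire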
#393
"Ed Hall"
Dec 2009
Adirondack Mtns
6732₈ Posts
I've changed eFarm.78 to 4 threads to see what happens with it. This one I do expect to time out, if the server is set to 3600 for clients. But maybe I'll be pleasantly surprised and find 4 threads better than 8.
#394
"Curtis"
Feb 2005
Riverside, CA
2²×13×89 Posts
My guess is the timeout is there for poly select workunits, because they are prone to getting "stuck," and because one late WU can delay an entire factorization by quite a bit, while a missing WU during sieving makes no difference.

We'll find out this weekend, it seems!
#395
"Ed Hall"
Dec 2009
Adirondack Mtns
2×3²×197 Posts
If I do create timeouts, let me know, and if they become too common, I'll stop "playing."
#396
charybdis
Apr 2020
13² Posts
Edit: I'm also noticing that the 6-thread machines aren't quite running at full load; I'm seeing load averages around 5.75. This isn't happening with the 4-thread machines. Maybe there is some inefficiency in running more than 4 threads per client?

Last fiddled with by charybdis on 2020-05-29 at 22:24
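If that per-client inefficiency is real, one workaround (a suggestion, not something tested in this thread) is to split a 6-thread machine into two 3-thread clients, since parts of each las run don't scale across all threads:

Code:
    # two 3-thread clients instead of one 6-thread client (same flags as
    # the earlier client sketch; server URL is a placeholder)
    ./cado-nfs-client.py --server=https://server.example:8001 --override -t 3 &
    ./cado-nfs-client.py --server=https://server.example:8001 --override -t 3 &

The catch, given the ~5G per client reported above, is that this roughly doubles the memory footprint.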