2020-04-30, 22:07   #12
VBCurtis

We've reached Q=20M, with 37.7M relations found. Yield is now above 2.0 on average, and the 8.7MQ sieved since my update yesterday brought in 19.5M relations, a yield of about 2.2. If yield stays put, that's ~160M relations from Q=2-80M. Yield with ggnfs at Q=700M was around 1.1, so we're saving a Q-range of ~140M by doing this CADO effort.
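For anyone checking the arithmetic, here is a minimal sketch of that projection. The yields are the figures above; the constant-yield assumption and the 80M endpoint are simplifications, since real yield drifts as Q grows:
Code:
# Back-of-the-envelope projection of total relations, assuming a
# roughly constant yield (relations per special-q) over the range.
q_start, q_end = 2e6, 80e6   # sieving range for this CADO effort
avg_yield = 2.0              # observed average so far (assumed to hold)

relations = avg_yield * (q_end - q_start)
print(f"projected relations: {relations/1e6:.0f}M")       # ~156M, i.e. ~160M

# Equivalent Q-range saved on ggnfs/15e, where yield near Q=700M is ~1.1:
print(f"ggnfs Q-range saved: {relations/1.1/1e6:.0f}MQ")  # ~142MQ, i.e. ~140M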

The job has been posted to the 15e queue, but is a ways down the list; I think we'll finish this CADO effort before the ggnfs relations are ready from 15e.
2020-05-03, 02:50   #13
VBCurtis

Today's update:
Q=34.2M, 72.3M relations found.
The server got stuck for a few minutes, and then processed ~40 workunits in a single burst. No idea what goes on in the database, but at least I didn't need to restart it this time!
2020-05-03, 15:04   #14
EdH

Quote:
Originally Posted by VBCurtis
Today's update:
Q=34.2M, 72.3M relations found.
The server got stuck for a few minutes, and then processed ~40 workunits in a single burst. No idea what goes on in the database, but at least I didn't need to restart it this time!
My machines had trouble uploading WUs for two periods yesterday: the first at around 5 PM Eastern and the second at around 9 PM Eastern. Examples from one of my machines show uploads that usually take about two seconds instead taking as long as 23 minutes and 48 seconds:
Code:
upload example 1:
start    - 17:03:32
complete - 17:21:10

upload example 2:
start    - 17:32:01
complete - 17:55:49

upload example 3:
start    - 21:42:07
complete - 21:51:27
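A quick way to tabulate those stalls from the timestamps above (plain Python, nothing CADO-specific):
Code:
from datetime import datetime

# Start/complete pairs copied from the upload examples above.
uploads = [("17:03:32", "17:21:10"),
           ("17:32:01", "17:55:49"),
           ("21:42:07", "21:51:27")]

for start, end in uploads:
    t0 = datetime.strptime(start, "%H:%M:%S")
    t1 = datetime.strptime(end, "%H:%M:%S")
    print(f"{start} -> {end}: {t1 - t0}")
# 0:17:38, 0:23:48, 0:09:20 -- versus the usual ~2 seconds.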
2020-05-03, 16:38   #15
EdH

Curiosity:

All WUs appear to be the same size (2000). I have two i5's that do not hyperthread, so they are running -t 4, and several i7's that do hyperthread, so they are running -t 8. All have at least 8 GB of RAM. Time-wise, the two i5's are just about keeping up with the i7's in completing their WUs.

The i5's are running at about 3200 MHz, while the i7's are running at about 3400 MHz.

The only thing I see in the i5's favor is that they have SSDs. Is there that much drive activity to account for these observations? Or is it something to do with hyperthreading overhead?
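One way to answer the hyperthreading question directly is to time the same small q-range at different thread counts on a single machine. A minimal sketch; the las arguments here are placeholders for whatever your job actually uses (las does take -t for threads, but check las -h for the rest):
Code:
import subprocess, time

# Hypothetical invocation: substitute your real polynomial file,
# factor base, and q-range.  Only -t is varied between runs.
BASE_CMD = ["./las", "-poly", "job.poly", "-q0", "20000000",
            "-q1", "20002000"]

for threads in (4, 8):   # physical cores vs. hyperthreads
    t0 = time.time()
    subprocess.run(BASE_CMD + ["-t", str(threads)],
                   check=True, stdout=subprocess.DEVNULL)
    print(f"-t {threads}: {time.time() - t0:.0f} s")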
2020-05-03, 19:32   #16
VBCurtis

The only relevant data I've taken is from running 6 threads on a 6-core, and then 12. The 12-threaded job was about 20% faster than the 6-threaded one.
I've no idea why a faster i7 would take as long as the i5, unless the i5 is a newer generation with newer instructions.
2020-05-04, 00:11   #17
EdH

Quote:
Originally Posted by VBCurtis
The only relevant data I've taken is from running 6 threads on a 6-core, and then 12. The 12-threaded job was about 20% faster than the 6-threaded one.
I've no idea why a faster i7 would take as long as the i5, unless the i5 is a newer generation with newer instructions.
That may very well be the difference. The i5's are much newer than the i7's.

BTW, as I write this, all of my machines are complaining:
Code:
2020-05-03 20:07:55,928 - ERROR:root:Upload failed, URL error: <urlopen error [Errno 111] Connection refused>
2020-05-03 20:07:55,928 - ERROR:root:Waiting 10.0 seconds before retrying (I have been waiting since 1930.0 seconds)
and:
Code:
INFO:root:spin=44 is_wu=True blog=0
INFO:root:Downloading http://TheMachine.dyn.ucr.edu:44455/cgi-bin/getwu?clientid=eFarm.20 to download/WU.eFarm.20117763498 (cafile = None)
ERROR:root:Download failed, URL error: <urlopen error [Errno 111] Connection refused>
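That log pattern (retry every 10 seconds while tracking the total wait) is just a blocking retry loop around the HTTP request; a minimal sketch of the idea, not the client's actual code:
Code:
import time
import urllib.request
from urllib.error import URLError

def fetch_with_retry(url, wait=10.0):
    """Retry a request until the server answers, reporting how long
    we have been waiting, in the style of the client log above."""
    waited = 0.0
    while True:
        try:
            return urllib.request.urlopen(url)
        except URLError as err:
            print(f"ERROR:root:Download failed, URL error: {err.reason}")
            print(f"Waiting {wait} seconds before retrying "
                  f"(I have been waiting since {waited} seconds)")
            time.sleep(wait)
            waited += wait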
2020-05-04, 00:51   #18
VBCurtis

Yeah, CADO quit; when I tried to restart it, I got the error message that we had hit the maximum of 100 failed workunits. Your machines killed us!

I set the new max to 1000, which should last us the duration of this effort. The server is back up.
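For anyone running their own server: this threshold lives in the server's parameter file alongside knobs like tasks.wutimeout. I believe the key is named as below, but the exact name is an assumption worth verifying against your CADO-NFS version's parameter documentation:
Code:
# Assumed parameter name -- check your version's docs before relying on it.
tasks.maxfailed = 1000   # abort threshold for failed workunits (default 100)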
2020-05-04, 01:16   #19
EdH

Quote:
Originally Posted by VBCurtis
Yeah, CADO quit; when I tried to restart it, I got the error message that we had hit the maximum of 100 failed workunits. Your machines killed us!

I set the new max to 1000, which should last us the duration of this effort. The server is back up.
Apologies!

But mine shouldn't be failing that much now, unless it's possibly due to "tasks.wutimeout = 3600 # one hour." My slower machines are taking less than 30 minutes to complete a WU, apart from the times they couldn't report back.

I probably used up some of the "failed" quota in the beginning, but I have all the scripts doing a good job of gracefully ending after a submission now, so the curfewed machines don't leave anything unfinished. I did have a couple of machines with the "condition most_full" failure; one had several. I installed a brand-new CADO-NFS on that one and haven't seen the error since.
2020-05-05, 19:07   #20
VBCurtis

Update: Q=49.2M, 111.2M total relations. Average yield: 111.2/47.2 = 2.36.

Yield since last update (Q=34.2M): 38.9M / 15M = 2.59. At the current yield, we'll get ~75M more relations for a total approaching 190M relations. That leaves ~850M for nfs@home to sieve.

We're running just over 5MQ a day, and I just added 10 threads. If Ed continues his support, we'll finish Monday the 11th.
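The finish date is easy to sanity-check from those figures; a quick sketch (the 2020-05-05 posting date and the 80M endpoint come from the thread context):
Code:
from datetime import date, timedelta

q_now, q_end = 49.2e6, 80e6   # current and final special-q
rate = 5e6                    # a bit over 5MQ sieved per day

days_left = (q_end - q_now) / rate
eta = date(2020, 5, 5) + timedelta(days=round(days_left))
print(f"{days_left:.1f} days -> {eta}")  # 6.2 days -> 2020-05-11, a Monday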
2020-05-05, 19:35   #21
EdH

Quote:
Originally Posted by VBCurtis
Update: Q=49.2M, 111.2M total relations. Average yield: 111.2/47.2 = 2.36.

Yield since last update (Q=34.2M): 38.9M / 15M = 2.59. At the current yield, we'll get ~75M more relations for a total approaching 190M relations. That leaves ~850M for nfs@home to sieve.

We're running just over 5MQ a day, and I just added 10 threads. If Ed continues his support, we'll finish Monday the 11th.
I expect to see this through, and I even just added the machine that finished the c178 HCN today.
2020-05-07, 01:36   #22
axn

Server unreachable for 40 minutes. Problem at my end, or at the server?