2021-09-17, 16:52   #43
VBCurtis ("Curtis", Feb 2005, Riverside, CA)

Quote:
Originally Posted by Xyzzy
Since GPU LA is so fast, should we rethink how many relations are generated by the sieving process?

In principle, yes. There's an electricity savings to be had by over-sieving less and accepting larger matrices, especially on the e-small queue, where matrices are nearly all under 15M. However, one can't push this very far: relation sets that fail to build any matrix delay jobs and cost admin time to add Q to the job. I've been trying to pick relations targets that leave it uncertain whether a job will build a matrix at target density TD=120, and I now advocate this for everyone on e-small. Some of the bigger 15e jobs could yield matrices over 30M, beyond the memory capabilities of GPU LA, so maybe those shouldn't change much?
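
To make the trade-off concrete, here's a toy Python sketch. Every constant and curve in it is a made-up placeholder, not a measurement from either queue; only the shape of the argument matters: the cheaper LA is per matrix row, the less it pays to keep sieving.

Code:
# Toy model of the "over-sieve less, accept a bigger matrix" trade-off.
# All constants below are hypothetical placeholders, not measured values.

MIN_RELS = 250e6     # hypothetical minimum unique relations to build any matrix
DIM_AT_MIN = 30e6    # hypothetical matrix dimension at that bare minimum
SHRINK_EXP = 4       # hypothetical rate at which extra relations shrink the matrix

def matrix_dim(rels: float) -> float:
    """Assumed: matrix dimension falls off as relations grow past the minimum."""
    return DIM_AT_MIN * (MIN_RELS / max(rels, MIN_RELS)) ** SHRINK_EXP

def total_cost(rels: float, la_cost_per_row: float) -> float:
    """Sieving cost grows with relations; LA cost grows with matrix dimension."""
    sieve_cost_per_rel = 1.0   # arbitrary energy unit
    return rels * sieve_cost_per_rel + matrix_dim(rels) * la_cost_per_row

def best_target(la_cost_per_row: float) -> float:
    """Pick the relations target with the lowest total cost under this toy model."""
    candidates = [r * 1e6 for r in range(250, 401, 5)]
    return min(candidates, key=lambda r: total_cost(r, la_cost_per_row))

print(best_target(la_cost_per_row=4.0) / 1e6)   # slow CPU LA: oversieve to ~285M
print(best_target(la_cost_per_row=0.5) / 1e6)   # fast GPU LA: stop near the 250M minimum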

Another way to view this is to aim for the number of relations one would use if one were doing the entire job on one's own equipment, and then add just a bit to reduce the chance of needing to ask admin for more Q (e.g., round the Q range up to the nearest 5M or 10M increment).
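
For the rounding bit, a trivial helper, assuming the job is specified by an upper special-q bound given as a raw Q value (the server's actual convention may differ):

Code:
def padded_q_limit(estimated_q_limit: float, increment_m: int = 5) -> int:
    """Round an estimated upper Q bound up to the next 5M (or 10M) boundary.

    estimated_q_limit: the raw upper sieving bound the expected yield suggests
    is needed. Padding to a round boundary adds a small cushion so admin
    rarely has to extend the job's Q range later.
    """
    increment = increment_m * 1_000_000
    return -(-int(estimated_q_limit) // increment) * increment  # ceiling division

# Example: an estimated need of Q up to 163.2M rounds to 165M, or 170M with 10M steps.
print(padded_q_limit(163_200_000))                  # 165000000
print(padded_q_limit(163_200_000, increment_m=10))  # 170000000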