2020-05-11, 14:07   #106
charybdis

58.3M CPU-seconds of sieving gave me this:

Code:
Mon May 11 13:07:16 2020  commencing relation filtering
Mon May 11 13:07:16 2020  setting target matrix density to 100.0
Mon May 11 13:07:16 2020  estimated available RAM is 15845.4 MB
Mon May 11 13:07:16 2020  commencing duplicate removal, pass 1
...
Mon May 11 13:29:01 2020  found 54632218 hash collisions in 211322739 relations
Mon May 11 13:29:23 2020  added 122394 free relations
Mon May 11 13:29:23 2020  commencing duplicate removal, pass 2
Mon May 11 13:33:23 2020  found 64594317 duplicates and 146850816 unique relations
...
Mon May 11 14:39:15 2020  matrix is 17010145 x 17010371 (6703.3 MB) with weight 1760454238 (103.49/col)
Mon May 11 14:39:15 2020  sparse part has weight 1587118135 (93.30/col)
Mon May 11 14:39:15 2020  using block size 8192 and superblock size 884736 for processor cache size 9216 kB
Mon May 11 14:39:58 2020  commencing Lanczos iteration (6 threads)
Mon May 11 14:39:58 2020  memory use: 6443.4 MB
Mon May 11 14:40:41 2020  linear algebra at 0.0%, ETA 127h46m

Probably needs a bit more sieving. The poly score was similar to the last two jobs, so 31/31 seems to be a little slower than 31/32, especially once you account for the more efficient qmin on this job. Is it worth trying mfb0 = 59 (this was with 58), or would that be unlikely to make a difference?

(Also, a c168 Homogeneous Cunningham has popped up - what params would you suggest if I decide to do that next?)
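For anyone sanity-checking the yield, here is a quick back-of-the-envelope script (just arithmetic on the numbers in the log above, not part of the job itself) that works out the duplicate rate and the sieving effort in core-days:

```python
# Figures taken directly from the msieve filtering log above.
raw_relations  = 211_322_739   # relations read during duplicate removal, pass 1
free_relations = 122_394       # free relations added
duplicates     = 64_594_317    # duplicates found in pass 2
unique         = 146_850_816   # unique relations surviving

total = raw_relations + free_relations
# The log is self-consistent: duplicates + uniques = raw + free.
assert duplicates + unique == total

dup_rate = duplicates / total
print(f"duplicate rate: {100 * dup_rate:.1f}%")   # ~30.5%

cpu_seconds = 58.3e6
print(f"sieving effort: {cpu_seconds / 86_400:.0f} core-days")  # ~675
```

A duplicate rate around 30% is one reason the unique-relation count came in lower than the raw total suggests; extra sieving at higher q will add relations but also push that rate up somewhat.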