2020-09-02, 23:36   #122
charybdis

Quote:
Originally Posted by VBCurtis
Sorry for the delay, been busy with some data-gathering for nfs@home queue planning.

A params.c185 file should have the usual 25-30% increase in lims, and we should test 32/32 against the current setting.

If we stay with 31/32, I'd add another 20-30M relations wanted. 32/32 should be 30% higher than that to start with.

Poly select should be about double the c180 file: say, a 60% increase in admax and a 25% increase in P.
This is what I've got for c180:

Code:
###########################################################################
# Polynomial selection
###########################################################################

tasks.polyselect.degree = 5
tasks.polyselect.P = 2500000
tasks.polyselect.admin = 10080
tasks.polyselect.admax = 22e5
tasks.polyselect.adrange = 1680
tasks.polyselect.incr = 210
tasks.polyselect.nq = 15625
tasks.polyselect.nrkeep = 96
tasks.polyselect.ropteffort = 35

###########################################################################
# Sieve
###########################################################################

tasks.I = 15
tasks.qmin = 20000000
tasks.lim0 = 95000000
tasks.lim1 = 135000000
tasks.lpb0 = 31
tasks.lpb1 = 32
tasks.sieve.mfb0 = 58
tasks.sieve.mfb1 = 90
tasks.sieve.lambda0 = 2.07
# tasks.sieve.lambda1 = 3.01 ?? would match what we've done with lambda0
tasks.sieve.ncurves0 = 20
tasks.sieve.ncurves1 = 13
tasks.sieve.rels_wanted = 300000000 # for a single machine; I've been aiming for around 320M
tasks.sieve.qrange = 5000
The polyselect parameters won't be optimal, but at least they produce decent polys.
The lims probably aren't optimal either; optimising them would probably require running the same number lots of times, which is easy enough at c120 but a bit of an issue at c180...
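
Just to make those percentages concrete, here's a rough sketch of the arithmetic (these are only the quoted scale factors applied to the c180 values above, not test-sieved settings):

Code:
# Back-of-the-envelope scaling of the c180 params above into a draft c185.
# The scale factors are just the percentages from the quoted suggestions
# (25-30% on lims, +20-30M rels at 31/32 and ~30% more for 32/32,
# +60% admax, +25% P); none of these are test-sieved values.

c180 = {
    "lim0": 95_000_000,
    "lim1": 135_000_000,
    "rels_wanted": 300_000_000,
    "admax": 22e5,
    "P": 2_500_000,
}

draft_c185 = {
    "lim0": round(c180["lim0"] * 1.3),             # ~123.5M, round to 125M
    "lim1": round(c180["lim1"] * 1.3),             # ~175.5M, round to 175M
    "rels_wanted_31/32": c180["rels_wanted"] + 25_000_000,                  # ~325M
    "rels_wanted_32/32": round((c180["rels_wanted"] + 25_000_000) * 1.3),   # ~420M
    "admax": c180["admax"] * 1.6,                  # 22e5 -> ~35e5
    "P": round(c180["P"] * 1.25),                  # 2.5M -> ~3.1M
}

for name, value in draft_c185.items():
    print(f"{name}: {value:,.0f}")
The scaled lims land right around 125M/175M, which is what the c184 block further down uses; the polyselect numbers (admax ~35e5, P ~3.1M) are just the straight 1.6x/1.25x scalings and would still want checking.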

Quote:
Originally Posted by VBCurtis
Edit: I'd also raise qmin to 25M or 30M. The most recent CADO factorization paper mentions that controlling the qmax/qmin ratio helps to control the duplicate rate; so as our jobs get tougher and sieve up to larger Q's, qmin should rise as well. If I understood what they said properly (a weak assumption), a ratio of 7 is a decent target, and duplicate rates get poor once the ratio exceeds 10. We saw that back when I suggested qmin of 500k, and their paper agrees with the data you gathered. We expect qmax of 175-200M, I think?
Thanks for sharing this! A ratio of 7 does indeed line up well with what I found. I'll try the following (edit: changed a bit to reflect Curtis's draft c185.params):
Code:
tasks.I = 15
tasks.qmin = 30000000
tasks.lim0 = 125000000
tasks.lim1 = 175000000
tasks.lpb0 = 31
tasks.lpb1 = 32
tasks.sieve.mfb0 = 58
tasks.sieve.mfb1 = 90
tasks.sieve.lambda0 = 2.07
tasks.sieve.ncurves0 = 20
tasks.sieve.ncurves1 = 13
for the first c184, and we'll see if you're right about needing an extra 20M-30M relations. The next number can be the trial run for 32/32.
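
And a quick check of the ratio arithmetic behind the qmin choice (the 175-200M qmax figures are just the guesses from the quote, not measured endpoints):

Code:
# qmax/qmin ratio check for the duplicate-rate heuristic quoted above:
# target a ratio around 7, expect trouble above ~10. The qmax values are
# the 175-200M guesses from the quote, not measured endpoints.
qmin_old, qmin_new = 20_000_000, 30_000_000

for qmax in (175_000_000, 200_000_000):
    print(f"qmax {qmax // 10**6}M: ratio {qmax / qmin_old:.1f} at qmin 20M, "
          f"{qmax / qmin_new:.1f} at qmin 30M")
# qmin 20M gives ratios of 8.8-10.0; qmin 30M keeps them at 5.8-6.7,
# comfortably under the ratio-7 target.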

Last fiddled with by charybdis on 2020-09-02 at 23:40