2021-07-18, 17:47   #3
charybdis

Originally Posted by bur
How is test sieving done with CADO? I thought to just start it normally and have it sieve for a defined time or percentage, but that doesn't measure how strongly yield will decrease at larger q. How is it usually done?
It depends on how thoroughly you're test-sieving.

If you're not going to change the parameters much, what wombatman suggests should be fine. But if you're going to test lots of different settings, especially on large jobs, you may get frustrated by how long CADO takes to generate free relations at the start of each job, even though they aren't used until filtering. In that case you may want to test-sieve manually. Here's how to do that.

You will need to use the makefb and las binaries, located in the cado-nfs/build/[machine name]/sieve directory.

First, run a command like

    makefb -poly [path to poly file] -lim [largest lim1 you might use] -maxbits [largest I you might use] -out [path to output file] -t [threads]

to generate the factor base file. The output file name should be something like jobname.roots1.gz. You only need to run this command once, since las can use the same factor base file with any lim1 up to the value you gave makefb.
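For instance, a hypothetical invocation might look like this — every file name and value here is a placeholder for your own job:

```shell
# Hypothetical example: job.poly, lim1 = 50000000, I = 14, 4 threads
# are placeholders. Use the largest lim1 and I you expect to test.
./makefb -poly job.poly -lim 50000000 -maxbits 14 \
         -out job.roots1.gz -t 4
```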

Then, to test-sieve, run

    las -poly [path to poly file] -I [I] -q0 [start of Q range] -q1 [end of Q range] -lim0 [lim0] -lim1 [lim1] -lpb0 [lpb0] -lpb1 [lpb1] -mfb0 [mfb0] -mfb1 [mfb1] -fb1 [path to factor base file] -out [path to output file] -t [threads] -stats-stderr

You can add further options such as -lambda0, -ncurves0 and -adjust-strategy if you like. CADO usually writes gzipped relation files, but if the output file name doesn't end in .gz it will write plain text, so you can read the stats at the end of the file without decompressing. Give the output file a name that makes it clear which parameters you used.
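To see how strongly yield falls off at larger q — the point of bur's question — you can sample a few short ranges spread across the candidate q interval. A sketch, where every path and parameter value is a placeholder for your own job:

```shell
#!/bin/sh
# Sketch only: job.poly, job.roots1.gz and all parameter values are
# placeholders. Sieve a short sample range at several q points, then
# compare the per-range stats to see how yield decays with q.
STEP=10000                       # width of each sample range
for Q0 in 20000000 60000000 100000000 140000000; do
    Q1=$((Q0 + STEP))
    ./las -poly job.poly -I 14 -q0 "$Q0" -q1 "$Q1" \
        -lim0 30000000 -lim1 50000000 -lpb0 31 -lpb1 31 \
        -mfb0 62 -mfb1 93 -fb1 job.roots1.gz \
        -out "test.q$Q0.txt" -t 4 -stats-stderr
done
```

Comparing the reported relation counts across the sample points shows how quickly yield decays; dividing the total relations you need by the average yield gives a rough estimate of how far the q range will have to run.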
