Operation Kibibit › Call for volunteers: RSA896

jasonp 2012-11-20 16:02

CADO does not support GPUs; other researchers have substituted their own GPU components for the corresponding pieces in CADO-NFS (i.e. made the Block Wiedemann code use a GPU for matrix multiplies).

It's an open question whether lowering the target norm makes it easier to find good hits. It certainly drastically reduces the number of hits found, and you would think that having the top few polynomial coefficients much smaller than average would help the size optimization deal with the low-order coefficients. But I don't know that; so far, all the hits I've found have led to polynomials a lot worse than their current best one.

Delivering big batches of hits isn't a problem; I can take them by email or you can upload to Rapidshare or some other online storage service if the results are more than a few megabytes compressed.

Greg: I don't know how many hits we should look for. Nobody has ever done a massively parallel polynomial search, as in throwing BOINC at it for months. For RSA768 we spent three months and collected something like 15 million stage 1 hits, and the best resulting polynomial was just about as good as the one actually used for the sieving. For RSA896 the current code divides the search space for each leading coefficient into 46 pieces, so searching one of those only exhausts 2% of the available search space. A trillion hits would probably be too much to deal with, though it would be awesome :)

frmky 2012-11-20 18:10

What's the Murphy e of their current best?

jasonp 2012-11-20 18:26

4.8e-19; Paul tells me that the best they expect is 20e-19.

frmky 2012-11-20 19:35

I ran stage 2 on a single core overnight, and the best was 1.704e-19.

RichD 2012-11-22 19:05

Thanks to Jason's help I've got a run started at 5*10^13.

It looks like I am getting around ten per minute in this range.

Correction: Better than 25/min. (I can't divide...)

frmky 2012-11-23 08:15

Great minds think alike. I started one GPU at 10^13, same as Jason, and the second at 5*10^13, same as you. :smile:

So far it's done 10^13 to 10^13+2.5*10^8 with 131k hits and 5*10^13 to 5*10^13+3*10^8 with 152k hits.

RichD 2012-11-23 19:17

Just over 24 hours in, I am approaching 38k hits but am still just under
5*10^13 + 1*10^8.

poily 2012-11-28 17:57

I got 417K stage 1 hits from

`msieve -np1 "stage1_norm=1e33 100000000000000,1000000000000000"`

Should I run stage 2 on them, or would it be better to give them to somebody?

jasonp 2012-11-29 01:24

Maybe post to rapidshare or some similar storage hosting?

poily 2012-11-29 14:03

Ok, here's my rsa896 stage 1 data for 1e13 and 1e14. I'll try to run stage 2 on part of it.

jasonp 2012-11-29 14:42

Okay. When running stage 2, you will save a lot of time if you first run with

`-nps "stage2_norm=1e100"`

so that only the size optimization runs for every hit. On my machine, running the size optimization on the 250k hits I have took about 36 hours. Then sort the resulting .ms file in order of increasing score (the last field in each line) and run msieve with -npr on only the few lines in the .ms file that have the smallest scores, perhaps just the 1000 best hits. The root sieve runs faster with a tight bound on the stage 2 norm, so try

`-npr "stage2_norm=1e37"`

or possibly smaller if that winds up taking too long. We've found that if you run all of stage 2 on every hit, you spend the majority of the time running the root sieve on hits that will never generate very good polynomials, and the root sieve can take minutes for a single hit (whereas the size optimization takes half a second).
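The sort-and-filter step above can be sketched with standard Unix tools. This is a hedged illustration only: the file name `demo.ms` and its three-field lines are made up for the demo (real msieve .ms lines have many more fields), but the idea is the same either way, since the score is always the last field on each line:

```shell
# Fake stand-in for an msieve size-optimization output file; real .ms
# lines have more fields, but the score is still the last one.
cat > demo.ms <<'EOF'
poly_a 101 5.1e-19
poly_b 102 1.2e-19
poly_c 103 3.4e-19
EOF

# Copy the last field ($NF) to the front as a sort key, sort it
# numerically ascending (-g handles exponent notation), strip the key
# again, and keep only the best 1000 lines for the -npr root sieve.
awk '{ print $NF, $0 }' demo.ms \
  | sort -g \
  | cut -d' ' -f2- \
  | head -n 1000 > demo.best.ms
```

`sort -g` (general numeric) is needed rather than `sort -n` because the scores are written in exponent notation.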
