-   Operation Kibibit
-   -   Call for volunteers: RSA896

Stargate38 2013-02-03 18:04

How many hits are needed? What's the ETA after the required number of hits is reached?

jasonp 2013-02-03 19:14

The answer to your first question is that nobody knows. On theoretical grounds we're supposed to be able to find polynomials that are drastically better than what we've been seeing, so if the 'time constant' of the search is measured by the time it takes to get really lucky once, then we've barely started. The previous search for RSA768 generated 30 million hits, one of which turned into a polynomial whose performance was indistinguishable from the one actually used in 2008-2010.

My archives now contain 21 million hits (thanks everybody!), and Paul's group has posted 18 million more found with CADO-NFS (they have a large number of CPUs searching). All of the hits we've found have passed through the CADO size optimization (except Greg's large batch above), and though we've found some polynomials in their top 10 we haven't gotten near their current best.

That being said, only a little of the current dataset (perhaps 5%) has passed through Msieve's size optimization. Could I persuade everyone to take a break and run the size optimization on the 39 million hits we have? This uses CPU only, and the current code can optimize a single polynomial in about 0.5 seconds, so we're looking at about 30 CPU-weeks of work to isolate the ~5000 best-scoring polynomials. Running the root sieve on those will take about 5 CPU days.
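A quick back-of-envelope check of the 30-CPU-week figure above (just the arithmetic from the post: 39 million hits at ~0.5 CPU-seconds each):

```shell
# 39e6 hits * 0.5 s/hit, converted to CPU-weeks.
awk 'BEGIN {
    secs  = 39e6 * 0.5
    weeks = secs / (86400 * 7)
    printf "%.0f CPU-weeks\n", weeks   # prints 32 CPU-weeks, in line with the ~30 quoted
}'
```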

To avoid me having to upload a blizzard of files, can I get 8 CPUs worth of volunteers? This job will be much easier if you can use unix text-processing tools, and I'll tell you how to run Msieve and postprocess the output.

(Paul's group estimated that their best polynomial as of a few months ago would require 35000 CPU-years to complete the sieving)

debrouxl 2013-02-03 19:33

I can use 4 hyperthreads of the usual Xeon E3-1230 through MPI-patched root optimization.

Dubslow 2013-02-03 20:16

I can throw in 1-2 quad core Sandy Britches (in a few days). GNU-Linux of course :smile: (I had been looking for a next project to keep my cores busy, good timing :smile:)

WraithX 2013-02-03 21:15

[QUOTE=jasonp;327353]To avoid me having to upload a blizzard of files, can I get 8 CPUs worth of volunteers? This job will be much easier if you can use unix text-processing tools, and I'll tell you how to run Msieve and postprocess the output.[/QUOTE]

I have a 12-core (dual 6-core Xeon 5645) linux computer that I can dedicate to this. I'd be happy to help out.

jasonp 2013-02-03 22:16

Okay, I'll start uploading files. Instructions:

- download your file (they're ~130MB)
- unzip and rename to msieve.dat.m
- with worktodo.ini set for RSA896, run
msieve -v -i <rsa896 worktodo> -nps "stage2_norm=1e100"
- the resulting output file is msieve.dat.ms; find the best size scores with
sort -g -k 11 msieve.dat.ms | head -5000 > sizeopt.out
(Note for Windows users: this is the unix sort, not the crappy MS sort)
If you want to manually split the file to run across multiple cores, rename each piece to <something_unique>.m and add '-s <something_unique>' to the msieve command line. The optimized output will be in <something_unique>.ms

- send me your top hits somehow. If you can't stand the suspense, rename your list of best hits to <something_unique>.ms and run the root sieve yourself with
msieve -v -i <rsa896 worktodo> -npr -s <something_unique>
Note that it isn't necessary to run the root sieve on all 5000 candidates; the odds are overwhelming that the best result your batch produces will come from one of the top 100 size-optimized polynomials. Also, note that the latest Msieve SVN has an adjustment to the alpha computation that will bias the E-value downwards, but the modified score exactly matches what the CADO tools report.
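Putting the steps above together, a per-volunteer workflow might look like the sketch below. Assumptions not in the post: GNU coreutils, a 4-core box, a worktodo.ini in the current directory, and the "part" chunk names, which are all just examples; the msieve runs themselves are left commented out, and a small synthetic stand-in file is generated so the text-processing part runs end to end.

```shell
#!/bin/sh
# Stand-in for the downloaded hit file (real use: unzip, rename to msieve.dat.m).
[ -f msieve.dat.m ] || seq 1000 > msieve.dat.m

# 1. Split into one piece per core; msieve expects a .m suffix on each piece.
lines=$(wc -l < msieve.dat.m)
split -l $(( lines / 4 + 1 )) -d msieve.dat.m part
for f in part??; do mv "$f" "$f.m"; done

# 2. Size-optimize each piece (commented out; each run writes partNN.ms):
# for f in part??.m; do
#     msieve -v -i worktodo.ini -nps "stage2_norm=1e100" -s "${f%.m}" &
# done; wait

# 3. Merge, rank by size score (column 11), keep the top 100 for the root
#    sieve. Shown here on the .m pieces so the sketch runs without msieve.
sort -g -k 11 part??.m | head -100 > top100.ms
# msieve -v -i worktodo.ini -npr -s top100
```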

I'll update this post with the other files (8 total). Each file has 5M hits, and would take about 30 days on one core.

debrouxl: [url=""]download 1[/url]
WraithX: [url=""]download 2[/url]
dubslow: [url=""]download 3[/url]
firejuggler: [url=""]download 4[/url]
WraithX: [url=""]download 5[/url]

dubslow: [url=""]download 6[/url]
poily: [url=""]download 7[/url]
debrouxl: [url=""]download 8[/url]
WraithX: Greg's pile in post #110

Dubslow 2013-02-03 23:19

Having successfully started the large LA with far fewer threads than I initially thought, I can now put at least 4 full cores on this right now, with maybe a few extra threads here and there. (Well, after the super bowl :smile:)

firejuggler 2013-02-03 23:46

I'll claim the third file if nobody else wants it.

Dubslow 2013-02-03 23:52

I certainly do want it :razz::smile:

WraithX 2013-02-04 04:35

[QUOTE=jasonp;327372]debrouxl: download 1
WraithX: download 2
dubslow: download 3
firejuggler: download 4

download 5[/QUOTE]

I'll take number 5 too.

Dubslow 2013-02-04 06:13

If I may, I would recommend [URL=""][/URL] for any more of these transfers. It's completely free and, more importantly, free of intrusive ads; it has no bandwidth restrictions, doesn't try to sign you up with your credit card, and doesn't force you to wait 60 seconds (or click through four buttons) to begin the download.

Edit: Easy way to split up the file into multiple chunks: `more +<1> <filename> | head -<2>`, where <1> is the beginning line number, and <2> is how many lines. So to split up a 1000 line file into 4 chunks, you'd do
[code]for num in 1 251 501 751; do
echo $num
more +$num <filename> | head -250 > <filename>.$num
done[/code] (or something similar).
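On systems where `more +N` isn't supported, `tail -n +N` does the same job; a portable variant of the loop above, with a generated 1000-line stand-in file (the name `demofile` is just an example):

```shell
#!/bin/sh
# Same 4-way split as above, using tail instead of more.
seq 1000 > demofile                   # stand-in for the real hit file
for num in 1 251 501 751; do
    tail -n +$num demofile | head -n 250 > demofile.$num
done
wc -l demofile.1 demofile.251 demofile.501 demofile.751
```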
