mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Operation Kibibit (https://www.mersenneforum.org/forumdisplay.php?f=97)
-   -   Poly select and test-sieving for RSA232 (https://www.mersenneforum.org/showthread.php?t=24254)

Max0526 2019-04-07 00:27

[QUOTE=VBCurtis;512918]...
If that goes well, we can decide whether to attack RSA-232 or RSA-240? In the meantime, there's no harm in polyselecting for either one. I have msieve aimed at RSA232 right now, and I'll start another thread for RSA-240 poly select sometime this month.[/QUOTE]
Please publish all your parameters and poly select ranges here. That sounds like an awesome project to pitch in on.

VBCurtis 2019-04-07 00:37

Msieve poly select: (CADO params posted earlier this thread)
Note msieve auto-selects degree 6 for this size.
starting coeff 200k, presently running to 400k at about 20k/day.
stage1_norm chosen 5e28, roughly the tightest that still splits the search region into two parts.
stage2_norm set initially at 1e29, which produced about 25 hits the first day. 200 hits a week sounds a little tight, but we need soooo many GPU-weeks of search that 200/week would still mean root-opting tens of thousands of hits in the long run so it's actually kind of loose.
First day's root-opt produced a 5.12e-17 and 5.10e-17. One of the CADO poly select papers mentioned improving on RSA-768 poly by 5-7%, which would be a score over 7.5e-17; if we're going to get stupid and try to sieve this, I think that's our target score. The CADO folks put something on the order of 20 CPU-years into poly select; I personally use a GPU = 3 CPU cores conversion for my own work, as that's about the power-use equivalent for my 750ti. I don't think there is a magic amount of time to spend, rather a magic poly score to hit; time discussions are more to remind folks that we won't be posting 7's on a daily basis!
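The effort and score figures above can be sanity-checked with a little arithmetic (the 1 GPU = 3 CPU cores conversion and the 7.5e-17 target are the figures from this post, not independent measurements):

```python
# Back-of-the-envelope check of the numbers quoted above.

cpu_years_cado = 20        # CADO team's reported poly select effort, in CPU-years
cpu_cores_per_gpu = 3      # power-use equivalence quoted for a 750 Ti
gpu_years = cpu_years_cado / cpu_cores_per_gpu
print(f"~{gpu_years:.1f} GPU-years of search at that conversion")

best_so_far = 5.12e-17     # best root-opt score from the first day
target = 7.5e-17           # 5-7% over the RSA-768 poly, per the CADO paper
print(f"target / best so far = {target / best_so_far:.2f}x")
```

So the target score is still roughly 1.5x the best hit from the first day, which is why the time spent matters less than the score eventually found.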

Max0526 2019-04-07 04:15

@VBCurtis
Thanks a lot for your parameters for CADO and msieve.

jasonp 2019-04-07 16:47

From previous experience, the GPU code is 50-100x faster than the CPU code, though the CADO CPU code has undergone much more optimization than the code in Msieve.

henryzz 2019-04-08 13:29

If we are considering a large forum factorisation using CADO, can I throw 732541^47-1 via SNFS into the ring for consideration? This number runs into bugs in lasieve which make it nearly unfactorable with those tools. I believe CADO should work fine (I have done test sieving in the past). It is one of the most-wanted numbers for the OPN project.

VBCurtis 2019-04-08 17:34

[QUOTE=henryzz;513069]If we are considering a large forum factorisation using CADO, can I throw 732541^47-1 via SNFS into the ring for consideration? This number runs into bugs in lasieve which make it nearly unfactorable with those tools. I believe CADO should work fine (I have done test sieving in the past). It is one of the most-wanted numbers for the OPN project.[/QUOTE]

What SNFS difficulty is it? What poly score (this most directly translates to GNFS difficulty)?
CADO is not really intended for SNFS jobs, and I don't have any experience with SNFS parameter selection in CADO. I'm not fully opposed to this job, but it doesn't fulfill my goals of making sure I understand and can handle a degree-6-sized GNFS job using the CADO tools.

I am mildly interested in parameter selection for SNFS, but that's separate from this RSA idea.

henryzz 2019-04-08 20:34

[QUOTE=VBCurtis;513115]What SNFS difficulty is it? What poly score (this most directly translates to GNFS difficulty)?
CADO is not really intended for SNFS jobs, and I don't have any experience with SNFS parameter selection in CADO. I'm not fully opposed to this job, but it doesn't fulfill my goals of making sure I understand and can handle a degree-6-sized GNFS job using the CADO tools.

I am mildly interested in parameter selection for SNFS, but that's separate from this RSA idea.[/QUOTE]

[CODE]Msieve v. 1.53 (SVN 998)
Mon Apr 8 21:31:13 2019
random seeds: 92d72f0c b9b65483
factoring 605716904027877980774625455520189647387776352555063757365644672493136637525085152114527251672682055452329862008130550673203343550128250999766605061023948523297828457779191592093682881010498969046911261346842026672855745883554109771998292748069377018429964450347583969787 (270 digits)
searching for 15-digit factors
commencing number field sieve (270-digit input)
R0: 82919274927962023982932249248351337261442889121
R1: -1
A0: -732541
A1: 0
A2: 0
A3: 0
A4: 0
A5: 0
A6: 1
skew 9.49, size 1.556e-14, alpha 2.799, combined = 5.387e-15 rroots = 2[/CODE]

I suspect we would do the polynomial selection with msieve for any GNFS regardless, so there wouldn't be much difference there. We have done that before.

This should be a bit easier than you suggested, I think (not sure about my SNFS-to-GNFS difficulty conversion): 281 * 0.7 = 197. It should be a good test of the server/client system for CADO, which I don't think we have used publicly on the forum yet. The postprocessing should also be a nice warmup rather than a months-long job. Any estimates on memory required?

A nice side effect of choosing this composite is that I already have 8M relations from special-q up to 100k, found while experimenting with small special-q and CADO.
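The msieve output above can be cross-checked: the rational side is m = 732541^8, the algebraic polynomial is x^6 - 732541, and f(m) = 732541^48 - 732541 is a multiple of 732541^47 - 1, which is where the SNFS difficulty of about 281 in the 281 * 0.7 conversion comes from. A quick sketch (the 0.7 factor is a rough rule of thumb from this post, not an exact formula):

```python
from math import log10

b = 732541
m = b**8                       # rational side: R0 = 732541^8, R1 = -1

# Algebraic side f(x) = x^6 - 732541. f(m) = 732541^48 - 732541 is a
# multiple of 732541^47 - 1, so both sides share a common root mod N.
assert (m**6 - b) % (b**47 - 1) == 0

difficulty = 48 * log10(b)     # SNFS difficulty, roughly log10(732541^48)
print(f"SNFS difficulty ~ {difficulty:.1f}")              # about 281
print(f"rough GNFS equivalent ~ {difficulty * 0.7:.0f}")  # about 197
```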

VBCurtis 2019-04-08 21:07

[QUOTE=henryzz;513150][CODE]skew 9.49, size 1.556e-14, alpha 2.799, combined = 5.387e-15 rroots = 2[/CODE]

I suspect we would do the polynomial selection with msieve for any GNFS regardless, so there wouldn't be much difference there. We have done that before.

This should be a bit easier than you suggested, I think (not sure about my SNFS-to-GNFS difficulty conversion): 281 * 0.7 = 197. It should be a good test of the server/client system for CADO, which I don't think we have used publicly on the forum yet. The postprocessing should also be a nice warmup rather than a months-long job. Any estimates on memory required?

A nice side effect of choosing this composite is that I already have 8M relations from special-q up to 100k, found while experimenting with small special-q and CADO.[/QUOTE]

The poly score matches the record polys for GNFS199 or GNFS200. I agree this is a nice warmup for a team-CADO-sieve. Param selection depends on whether we go for I=16 and something like 8-12GB per job (split over, say, 4 threads so one job per typical client machine), or choose I=15 to reduce memory to 2.5-3GB per 4-threaded job at the expense of not really gaining insight into bigger jobs. If this job were our sole interest, I would think I=15 and 34-bit large primes, not least because I want experience with LP>33 in any case.
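For reference, the I=15 / 34-bit option floated above might look something like the following fragment of a CADO-NFS params file. The specific mfb values and thread count are illustrative guesses to show where the knobs live, not tested choices:

```
tasks.I = 15
tasks.lpb0 = 34          # 34-bit large primes on both sides
tasks.lpb1 = 34
tasks.sieve.mfb0 = 68    # e.g. allowing two large primes per side (2 x 34)
tasks.sieve.mfb1 = 68
tasks.threads = 4        # one 4-threaded job per typical client machine
```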

As for poly select, I think CADO's degree 6 optimizations make msieve second-best. I'm running gpu-msieve for a couple weeks on RSA232 just in case, but I expect CADO to produce better polys.

As for server memory requirements, the developers indicate quite a lot of memory is needed. I'll finish sieving a C186 at the end of April, and plan to closely track memory use in each CADO postprocessing stage on my 64GB machine. My hope is that an upgrade to 128GB RAM with a ~200GB swap partition on SSD is enough to filter any job smaller than C225.

Robert_JD 2019-04-25 19:13

Based upon my experience in successfully factoring RSA-200 in November of last year, such a job would take about 135GB of memory during the filtering/merge stage. I currently have only 128GB of RAM and was compelled to use a 256GB flash drive as a swap partition. Moreover, according to Paul Zimmermann, memory requirements can be reduced by changing the parameter tasks.filter.target_density from 170.0 to 150. Additional reductions in memory requirements can be had by reducing the tasks.filter.purge.keep = 160 parameter. Paul made other suggestions and tips, but I can't remember those details presently, other than the fact that when RSA-220 was factored, that job took just under 200GB of RAM.:smile:
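The two knobs mentioned in this post would go in the CADO-NFS params file like this (values straight from the post; the trade-off comments are general expectations, not measured results):

```
tasks.filter.target_density = 150   # down from 170.0; lighter merge, at some cost in matrix size/time
tasks.filter.purge.keep = 160       # value quoted above; lowering it further trims filtering memory
```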

R.D. Silverman 2019-12-18 16:13

[QUOTE=Robert_JD;514692]Based upon my experience in successfully factoring RSA-200 in November of last year, such a job would take about 135GB of memory during the filtering/merge stage. I currently have only 128GB of RAM and was compelled to use a 256GB flash drive as a swap partition. Moreover, according to Paul Zimmermann, memory requirements can be reduced by changing the parameter tasks.filter.target_density from 170.0 to 150. Additional reductions in memory requirements can be had by reducing the tasks.filter.purge.keep = 160 parameter. Paul made other suggestions and tips, but I can't remember those details presently, other than the fact that when RSA-220 was factored, that job took just under 200GB of RAM.:smile:[/QUOTE]

Huh? Assuming that the number under discussion is 732541^47-1, I must ask: what is all this talk about polynomial selection? This is a C276 SNFS job and the obvious sextic polynomial should do nicely. This is not a large job by current standards. lasievee can handle it readily.

henryzz 2019-12-19 09:47

[QUOTE=R.D. Silverman;533171]Huh? Assuming that the number under discussion is 732541^47-1, I must ask: what is all this talk about polynomial selection? This is a C276 SNFS job and the obvious sextic polynomial should do nicely. This is not a large job by current standards. lasievee can handle it readily.[/QUOTE]
I think that comment might have been going back to the original topic of RSA232.

732541^47-1 runs into bugs in lasieve that make it nearly unfactorable. CADO seems to be the best alternative.