mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Operation Kibibit (https://www.mersenneforum.org/forumdisplay.php?f=97)
-   -   Poly select and test-sieving for RSA232 (https://www.mersenneforum.org/showthread.php?t=24254)

VBCurtis 2019-04-03 00:57

Poly select and test-sieving for RSA232
 
The path for mere mortals to contribute to a factorization of RSA-896 or RSA-1024 may include getting an empirical sense for how our current tools scale for projects of 230+ digits.
As a start on this path, I spent about two core-weeks on CADO poly select for RSA-232:
[code]n: 1009881397871923546909564894309468582818233821955573955141120516205831021338528545374366109757154363664913380084917065169921701524733294389270280234380960909804976440540711201965410747553824948672771374075011577182305398340606162079
skew: 2344396.759
type: gnfs
c0: -4444911653278229819370022568140110402793683027936
c1: -1601077621210143696221661846379560448537934
c2: 10503357361068234616583254671616966819
c3: -106292952091953896748493562554
c4: -2012141349873927583036795
c5: -21775742789901120
c6: -43338240
Y0: -48749394190388072676687368853974603055
Y1: 1395619071010682498582953
# MurphyE (Bf=5.498e+11,Bg=6.872e+10,area=8.590e+16) = 2.92e-08
# found by revision 78c3a74[/code]
I ran this through msieve: size 5.322e-17, alpha -10.267, combined = 5.391e-17 rroots = 6
On our best-polys table, an increase of 16 digits corresponds to an order-of-magnitude decrease in Murphy-E: ~4.7e-14 is the record for 184 digits, ~5.3e-15 is the record for 200 digits, and ~4.6e-16 (deg 6) is the record for 216 digits. So, at 5.4e-17, this is a decent baseline polynomial for some parameter-choice work. I tried to feed the polynomial from the RSA-768 factorization paper into msieve to compare scores, but it core-dumped and I couldn't figure out my mistake.
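As a sanity check on that order-of-magnitude-per-16-digits claim, here is a quick least-squares fit of log10(score) against digit count, using only the three record scores quoted above (a sketch in Python; nothing beyond those data points is assumed):
[code]import math

# Record Murphy-E scores from the best-polys table: (digits, score)
records = [(184, 4.7e-14), (200, 5.3e-15), (216, 4.6e-16)]

# Fit log10(score) as a linear function of digit count (least squares).
n = len(records)
xs = [d for d, _ in records]
ys = [math.log10(s) for _, s in records]
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)

print(f"slope: {slope:.4f} log10-units/digit")  # ~ -1/16: one decade per 16 digits
predicted = 10 ** (ybar + slope * (232 - xbar))
print(f"trend-line record at 232 digits: {predicted:.2e}")  # ~4.8e-17[/code]
The extrapolation lands right around 5e-17, which is why 5.4e-17 looks like a sensible baseline.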
I'll report some test-sieve results using 16f next.

axn 2019-04-03 02:59

[CODE]searching for 15-digit factors
commencing number field sieve (232-digit input)
R0: -1291187456580021223163547791574810881
R1: 34661003550492501851445829
A0: -277565266791543881995216199713801103343120
A1: -18185779352088594356726018862434803054
A2: 6525437261935989397109667371894785
A3: -46477854471727854271772677450
A4: -5006815697800138351796828
A5: 1276509360768321888
A6: 265482057982680
skew 44000.00, size 7.072e-017, alpha -7.298, combined = 7.024e-017 rroots = 6[/CODE]

VBCurtis 2019-04-03 04:23

Thank you!
It should be noted that axn posted the poly for RSA-768; after 20 core-years of search (~eleven years ago), its score is 30% better than my RSA-232 poly after 2 core-weeks.

VBCurtis 2019-04-04 18:42

RSA768 was factored with the equivalent of I=16 on CADO (same sieve region as NFS@home's 16fV5 siever). I=17 may be faster, but has a memory footprint too large for most users (I failed to get a single instance to start on a 24GB system a few months ago). Alas, I haven't yet figured out how to test-sieve within CADO, so instead I tried a little test with the 16f siever. "top" reported 3.5GB memory use for rlim=980M, alim=1640M. I also tested rlim=alim=1070M, but memory use only dropped to 2.9GB while yield was more than 10% worse and sec/rel was little changed.
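For context on those memory jumps: as I understand CADO's las, the sieve region per special-q is 2^I wide by 2^(I-1) tall, so each step of I quadruples the sieve area. The region is only part of the footprint (factor bases at lims this size take most of it), but it shows the growth rate:
[code]# CADO's las sieves a region of 2^I by 2^(I-1) points per special-q,
# so the area is 2^(2I-1). (Factor bases add to this footprint.)
for i in (15, 16, 17):
    area = 2 ** (2 * i - 1)
    print(f"I={i}: {area:,} points per special-q")
# Each step of I is a 4x jump in sieve area, which is why I=17
# can fail to start on machines where I=16 runs comfortably.[/code]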
My test-sieve used mfbr=71, lpbr=37, mfba=110, lpba=40.
These settings are a bit tighter than those used by the group that factored RSA-768: they used smaller lims to save memory but 40LP on both sides, with mfbr=110, mfba=140 (4 large primes!). I initially tried 107/39 for the a-side, but 110/40 had approximately twice the yield.
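The mfb/lpb pairs encode the large-prime counts directly: the cofactor left after sieving is below 2^mfb and must split into primes each below 2^lpb, so it holds at most ceil(mfb/lpb) large primes. A quick check of both parameter sets (standard rule of thumb, nothing else assumed):
[code]import math

def n_large_primes(mfb, lpb):
    # A cofactor below 2^mfb, splitting into primes each below 2^lpb,
    # can contain at most ceil(mfb/lpb) of them.
    return math.ceil(mfb / lpb)

# This test-sieve: mfbr=71/lpbr=37, mfba=110/lpba=40
print(n_large_primes(71, 37), n_large_primes(110, 40))    # 2 3
# RSA-768 team: mfbr=110/lpbr=40, mfba=140/lpba=40
print(n_large_primes(110, 40), n_large_primes(140, 40))   # 3 4[/code]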
[code]#37-40/71-110, lambda 2.7/3.7: dQ=500
#Q=100M 7736 rels, 0.296 sec/rel
#Q=600M 5041 rels, 0.441 sec/rel
#Q=1100M 5154 rels, 0.483 sec/rel
#Q=1600M 3088 rels, 0.571 sec/rel
#Q=2100M 1741 rels, 0.634 sec/rel
#Q=2600M 2818 rels, 0.648 sec/rel
#Q=3100M 2683 rels, 0.674 sec/rel
#Q=3600M 3516 rels, 0.696 sec/rel
#Q=4100M 1574 rels, 0.775 sec/rel[/code]
The team that cracked RSA-768 used 40LP on both sides, 3 rational large primes and 4 algebraic large primes. They sieved 64G raw relations, and observed that this was substantial oversieving (they estimated by a factor of 2). So, moving down to 37LP and 2 large primes on the r side, with 3 large primes on the a side, suggests less than half as many relations would be required; say, somewhere between 25G and 30G relations.
Yield is over 10 from 100M to 1200M, good for ~12G relations.
Yield is over 5 for the rest of the Q-range, say 1200M to 3800M for ~15G relations.
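Those two yield figures can be rebuilt from the dQ=500 samples above. A back-of-envelope integration, treating each sample's yield (rels/500) as representative of a 500M-wide slice of Q (the samples are noisy, so read the output as a rough check rather than a forecast):
[code]# (special-q, relations in a dQ=500 sample) from the table above
samples = [(100e6, 7736), (600e6, 5041), (1100e6, 5154),
           (1600e6, 3088), (2100e6, 1741), (2600e6, 2818),
           (3100e6, 2683), (3600e6, 3516), (4100e6, 1574)]

total = 0.0
for q, rels in samples:
    yield_per_q = rels / 500.0    # relations per unit of special-q
    total += yield_per_q * 500e6  # each sample stands in for a 500M slice

print(f"rough total, Q=100M..4.35G: {total / 1e9:.0f}G relations")  # ~33G[/code]
That lands near 33G over the full range, the same ballpark as the ~27G estimated above (the Q=2100M sample looks like a low outlier).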
So, as a proof of concept, the 16f siever appears sufficient to crack RSA-232. If I had hundreds of cores at my disposal, I'd point the large-memory cores at CADO with I=17 to run Q=50M to 200M, and for medium-memory cores I'd use 16f with 3.5GB/core. Once I solve CADO test-sieving, I'll report I=16 memory use and yield.
Sec/rel is measured on a Haswell i7-5820, a 6-core 3.3GHz CPU with 6 other tasks running. At half a second per relation, we're looking at 14 gigacore-seconds, or 450 core-years. This is massively less than the 2000 CPU-years that RSA-768 took, but those were measured on c. 2008 2.2GHz AMD processors. However, my CPU is not 4 times faster than theirs, which suggests my test-sieving is flawed in some way. I don't see any problem with yields with just 2RLP/3ALP, while the RSA-768 team sieved Q=450M-11G using 3RLP/4ALP; again, this suggests my test-sieving is flawed. Then again, they used rlim=200M & alim=1100M to fit into 2GB/core, which may have impacted yield and efficiency more than I realize. Their matrix was ~190M by 190M, and better param choice + fewer large primes should lead to a yet-smaller matrix for RSA232.
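The core-year arithmetic, spelled out (taking 28G relations as a round midpoint of the 25G-30G estimate above):
[code]relations = 28e9          # midpoint-ish of the 25G-30G estimate
sec_per_rel = 0.5         # measured above at mid-range Q
core_seconds = relations * sec_per_rel
core_years = core_seconds / (365.25 * 24 * 3600)
print(f"{core_seconds:.1e} core-seconds ~ {core_years:.0f} core-years")
# 1.4e10 core-seconds, i.e. ~444 core-years[/code]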
If anyone is interested in taking another baby step toward such a factorization, they are invited to find a better polynomial. Poly select has improved quite a lot in 10 years, so we should be able to beat the RSA-768 score by at least 5%. If we do improve on my poly by 20% or more, a more detailed test-sieve could demonstrate that we need fewer than 400 core-years for sieving, a level possible within this forum.

VBCurtis 2019-04-04 18:51

In case anyone is silly enough to try some polyselect, here is what I ran in CADO for my small search:
[code]tasks.polyselect.degree = 6
tasks.polyselect.P = 40000000
tasks.polyselect.admax = 1e5
tasks.polyselect.adrange = 120
tasks.polyselect.incr = 60
tasks.polyselect.nq = 7776 # this is 6^5
tasks.polyselect.nrkeep = 36
tasks.wutimeout = 36000 # required for rootsieve in degree 6[/code]

I'll have a GPU free this weekend, so I'll do a little msieve-GPU search too; say, 200-400k in degree 6.

pinhodecarlos 2019-04-04 20:39

I'll help with sieving, and I can use up to 4GB/thread.

Max0526 2019-04-06 19:13

RSA-768
 
[QUOTE=VBCurtis;512490]...
I tried to feed the polynomial from the RSA-768 factorization paper into msieve to compare scores, but it core-dumped and I couldn't figure out my mistake.[/QUOTE]
Usually it's because msieve doesn't properly process minus signs copied from TeX/PDF documents ( − ). They should be manually changed into regular hyphens ( - ).
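A one-liner along those lines in Python (the filename is hypothetical; U+2212 is the Unicode minus that TeX/PDF copy-paste produces):
[code]# Replace Unicode minus signs (U+2212) with ASCII hyphens so msieve
# can parse a poly file copied out of a PDF. Filename is hypothetical.
text = open("rsa768.poly", encoding="utf-8").read()
open("rsa768.poly", "w", encoding="utf-8").write(text.replace("\u2212", "-"))[/code]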

RichD 2019-04-06 21:10

[QUOTE=VBCurtis;512689]... a level possible within this forum.[/QUOTE]

Thinking ahead, perhaps Greg can host a "hidden queue" for us Mersennians to work at our leisure. But who on earth could handle the post-processing?...

Max0526 2019-04-06 22:36

[QUOTE=RichD;512905]Thinking ahead, perhaps Greg can host a "hidden queue" for us Mersennians to work at our leisure. But who on earth could handle the post-processing?...[/QUOTE]
People who develop and host CADO-NFS could do that for sure. They are always at least a step ahead.

VBCurtis 2019-04-06 23:15

[QUOTE=RichD;512905]Thinking ahead, perhaps Greg can host a "hidden queue" for us Mersennians to work at our leisure.[/QUOTE]

Greg has in the past expressed disinterest in projects using LP>33 because of runaway storage needs. Indeed, even if we give up 10+% efficiency to drop LP to something like 35/37, we'll still need on the order of 8G raw relations, which roughly matches the relations count in the entire 14e and 15e queues!
I don't mind buying an extra disk for my workstation and running an internet-facing CADO server on that disk, which would allow us to collect relations at our leisure. Do 15G relations fit into 1TB? Do I need double the disk space (e.g. a 2TB dedicated disk) of the expected relation set to handle things like uniqueing, compression, and filtering?
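A rough answer to the disk question, assuming ~150 bytes per uncompressed relation line (an assumed figure; actual size depends on large-prime bounds and output format):
[code]relations = 15e9
bytes_per_rel = 150  # assumption: typical uncompressed relation line
raw_tb = relations * bytes_per_rel / 1e12
print(f"~{raw_tb:.1f} TB raw")  # ~2.2 TB before compression/uniqueing[/code]
Under that assumption, the raw set would not fit in 1TB uncompressed, and a working margin of roughly 2x on top of it seems prudent.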

Also, thanks to Max for pointing out the minus-sign problem with copy-paste! I did exactly as he suspected.

VBCurtis 2019-04-06 23:22

[QUOTE=RichD;512905]But who on earth could handle the post-processing?...[/QUOTE]
I think this particular candidate falls into a no-man's-land, where the lack of a possible record factorization acts against a possible XSEDE proposal, yet it's too big for any single machine we have access to. I'm willing to spend the $200 to upgrade my 20-core Xeon to 128GB, which is likely enough to solve a matrix from a GNFS-215 or GNFS-220 project, but insufficient for RSA-232. Perhaps a pair of InfiniBand cards would allow me to solve RSA-232 personally in under a year, but that sounds more like a hope than a plan!

However, if we went after RSA-240, we might be able to get XSEDE to grant cluster time for the matrix. That should take on the order of 1000 core-years of sieving, way too much for forumites alone!
In the interests of gaining experience with big jobs, I've volunteered to postprocess a C206 for swellman that Greg is sieving on 16f; fivemack/Greg are doing the same for the C205 from Aliq 276. If we're serious as a group about doing some quixotic team-sieve, I suggest we find a GNFS in the 210 to 218 digit range to team-factor, so that we can collectively iron out details of data management etc. If my time estimates are correct, we can knock one out with 50 to 100 core-years of sieving, and less than a machine-year of postprocessing; if they're not, I'd rather learn I'm wrong by 50 years than 500!

There are cloud-service machines with big memory that could be rented for this matrix, which would well and truly fit a publication of this factorization using "publicly available tools!"

If that goes well, we can decide whether to attack RSA-232 or RSA-240. In the meantime, there's no harm in polyselecting for either one. I have msieve aimed at RSA232 right now, and I'll start another thread for RSA-240 poly select sometime this month.

Max0526 2019-04-07 00:27

[QUOTE=VBCurtis;512918]...
If that goes well, we can decide whether to attack RSA-232 or RSA-240. In the meantime, there's no harm in polyselecting for either one. I have msieve aimed at RSA232 right now, and I'll start another thread for RSA-240 poly select sometime this month.[/QUOTE]
Please publish all your parameters and poly select ranges here. That sounds to me like an awesome project to pitch in on.

VBCurtis 2019-04-07 00:37

Msieve poly select (CADO params were posted earlier in this thread):
Note msieve auto-selects degree 6 for this size.
starting coeff 200k, presently running to 400k at about 20k/day.
stage1_norm chosen 5e28, roughly the tightest that still splits the search region into two parts.
stage2_norm set initially at 1e29, which produced about 25 hits the first day. 200 hits a week sounds a little tight, but we need soooo many GPU-weeks of search that 200/week would still mean root-opting tens of thousands of hits in the long run so it's actually kind of loose.
First day's root-opt produced a 5.12e-17 and a 5.10e-17. One of the CADO poly select papers mentioned improving on the RSA-768 poly by 5-7%, which would be a score over 7.5e-17; if we're going to get stupid and try to sieve this, I think that's our target score. The CADO folks put something on the order of 20 CPU-years into poly select; I personally use a GPU = 3 CPU cores conversion for my own work, as that's about the power-use equivalent for my 750ti. I don't think there is a magic amount of time to spend, rather a magic poly score to hit; time discussions are more to remind folks that we won't be posting 7's on a daily basis!

Max0526 2019-04-07 04:15

@VBCurtis
Thanks a lot for your parameters for CADO and msieve.

jasonp 2019-04-07 16:47

From previous experience, the GPU code is 50-100x faster than the CPU code, though the CADO CPU code has undergone much more optimization than the code in Msieve.

henryzz 2019-04-08 13:29

If we are considering a large forum factorisation using CADO, can I throw 732541^47-1 via SNFS into the ring for consideration? This number runs into bugs in lasieve which make it nearly unfactorable by those tools. I believe CADO should work fine (I have done test sieving in the past). It is one of the most-wanted numbers for the OPN project.

VBCurtis 2019-04-08 17:34

[QUOTE=henryzz;513069]If we are considering a large forum factorisation using CADO, can I throw 732541^47-1 via SNFS into the ring for consideration? This number runs into bugs in lasieve which make it nearly unfactorable by those tools. I believe CADO should work fine (I have done test sieving in the past). It is one of the most-wanted numbers for the OPN project.[/QUOTE]

What SNFS difficulty is it? What poly score (this most directly translates to GNFS difficulty)?
CADO is not really intended for SNFS jobs, and I don't have any experience with SNFS parameter selection in CADO. I'm not fully opposed to this job, but it doesn't fulfill my goals of making sure I understand and can handle a degree-6-sized GNFS job using the CADO tools.

I am mildly interested in parameter selection for SNFS, but that's separate from this RSA idea.

henryzz 2019-04-08 20:34

[QUOTE=VBCurtis;513115]What SNFS difficulty is it? What poly score (this most directly translates to GNFS difficulty)?
CADO is not really intended for SNFS jobs, and I don't have any experience with SNFS parameter selection in CADO. I'm not fully opposed to this job, but it doesn't fulfill my goals of making sure I understand and can handle a degree-6-sized GNFS job using the CADO tools.

I am mildly interested in parameter selection for SNFS, but that's separate from this RSA idea.[/QUOTE]

[CODE]Msieve v. 1.53 (SVN 998)
Mon Apr 8 21:31:13 2019
random seeds: 92d72f0c b9b65483
factoring 605716904027877980774625455520189647387776352555063757365644672493136637525085152114527251672682055452329862008130550673203343550128250999766605061023948523297828457779191592093682881010498969046911261346842026672855745883554109771998292748069377018429964450347583969787 (270 digits)
searching for 15-digit factors
commencing number field sieve (270-digit input)
R0: 82919274927962023982932249248351337261442889121
R1: -1
A0: -732541
A1: 0
A2: 0
A3: 0
A4: 0
A5: 0
A6: 1
skew 9.49, size 1.556e-14, alpha 2.799, combined = 5.387e-15 rroots = 2[/CODE]

I suspect we would likely do the polynomial selection with msieve regardless for any GNFS so there wouldn't be much difference there. We have done that before.

This should be a bit easier than you suggested, I think (not sure about my SNFS-to-GNFS difficulty conversion): 281*0.7=197. It should be a good test of the server/client system for CADO, which I don't think we have used publicly on the forum yet. The postprocessing should also be a nice warmup rather than a months-long job. Any estimates on memory required?
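Where the 281 comes from: 47 isn't a multiple of 6, so the sextic above actually represents 732541^48 - 732541 (note R0 = 732541^8 in the msieve output), and SNFS difficulty is the size of that number. A quick check of the conversion:
[code]import math

# x^6 - 732541 evaluated at x = 732541^8 (= R0 above), so the
# represented number has ~48*log10(732541) digits.
difficulty = 48 * math.log10(732541)
print(f"SNFS difficulty ~ {difficulty:.1f}")        # ~281.5
# henryzz's rough SNFS-to-GNFS conversion factor of 0.7:
print(f"GNFS equivalent ~ {0.7 * difficulty:.0f}")  # ~197[/code]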

A nice side effect of choosing this composite is that I already have 8M relations from special-q up to 100k that I found while experimenting with small special-q and CADO.

VBCurtis 2019-04-08 21:07

[QUOTE=henryzz;513150][CODE]skew 9.49, size 1.556e-14, alpha 2.799, combined = 5.387e-15 rroots = 2[/CODE]

I suspect we would likely do the polynomial selection with msieve regardless for any GNFS so there wouldn't be much difference there. We have done that before.

This should be a bit easier than you suggested, I think (not sure about my SNFS-to-GNFS difficulty conversion): 281*0.7=197. It should be a good test of the server/client system for CADO, which I don't think we have used publicly on the forum yet. The postprocessing should also be a nice warmup rather than a months-long job. Any estimates on memory required?

A nice side effect of choosing this composite is that I already have 8M relations from special-q up to 100k that I found while experimenting with small special-q and CADO.[/QUOTE]

The poly score matches the record polys for GNFS199 or GNFS200. I agree this is a nice warmup for a team-CADO-sieve. Param selection depends on whether we go for I=16 and something like 8-12GB per job (split over, say, 4 threads so one job per typical client machine), or choose I=15 to reduce memory to 2.5-3GB per 4-threaded job at the expense of not really gaining insight into bigger jobs. If this job were our sole interest, I would think I=15 and 34-bit large primes, not least because I want experience with LP>33 in any case.

As for poly select, I think CADO's degree 6 optimizations make msieve second-best. I'm running gpu-msieve for a couple weeks on RSA232 just in case, but I expect CADO to produce better polys.

As for server memory requirements, the developers indicate quite a lot of memory is needed. I'll finish sieving a C186 at the end of April, and plan to closely track memory use in each CADO postprocessing stage on my 64GB machine. My hope is that an upgrade to 128GB RAM with a ~200GB swap partition on SSD is enough to filter any job smaller than C225.

Robert_JD 2019-04-25 19:13

Based upon my experience in successfully factoring RSA-200 in November of last year, such a job would take about 135GB of memory during the filtering/merge stage. I currently have only 128GB of RAM & was compelled to use a 256GB flash drive as a swap partition. Moreover, according to Paul Zimmermann, memory requirements can be reduced by changing the parameter "tasks.filter.target_density" from 170.0 to 150. Additional reductions in memory requirements can also be facilitated by reducing the "tasks.filter.purge.keep = 160" parameter. Paul made other suggestions & tips, but I can't remember those details presently, other than the fact that when RSA-220 was factored, that job took just under 200GB of RAM. :smile:

R.D. Silverman 2019-12-18 16:13

[QUOTE=Robert_JD;514692]Based upon my experience in successfully factoring RSA-200 in November of last year, such a job would take about 135GB of memory during the filtering/merge stage. I currently have only 128GB of RAM & was compelled to use a 256GB flash drive as a swap partition. Moreover, according to Paul Zimmermann, memory requirements can be reduced by changing the parameter "tasks.filter.target_density" from 170.0 to 150. Additional reductions in memory requirements can also be facilitated by reducing the "tasks.filter.purge.keep = 160" parameter. Paul made other suggestions & tips, but I can't remember those details presently, other than the fact that when RSA-220 was factored, that job took just under 200GB of RAM. :smile:[/QUOTE]

Huh? Assuming that the number under discussion is 732541^47-1, I must ask: what is all this talk about polynomial selection?? This is a C276 SNFS job and the obvious sextic polynomial should do nicely. This is not a large job by current standards. lasievee can handle it readily.

henryzz 2019-12-19 09:47

[QUOTE=R.D. Silverman;533171]Huh? Assuming that the number under discussion is 732541^47-1, I must ask: what is all this talk about polynomial selection?? This is a C276 SNFS job and the obvious sextic polynomial should do nicely. This is not a large job by current standards. lasievee can handle it readily.[/QUOTE]
I think that comment might have been going back to the original topic of RSA232.



732541^47-1 runs into bugs in lasieve that make it nearly unfactorable. CADO seems to be the best alternative.

orever 2020-01-06 16:42

Hello.
This text was written with the help of Google Translate.

I entered this world of factoring 6 months ago.

I am having trouble factoring a 1024-bit number; I know it is too large.

About polyselect: it ran for 99 days.

tasks.polyselect.P = 10000000
tasks.polyselect.admax = 1e8
tasks.polyselect.adrange = 250
tasks.polyselect.degree = 6
tasks.polyselect.incr = 60
tasks.polyselect.nq = 1296
tasks.polyselect.nrkeep = 100
tasks.polyselect.threads = 2


n: 1024 bits
skew: 3280536.395
# MurphyE (Bf=1.43e+09,Bg=7.00e+08,area=3.07e+18) = 4.29e-16

Is this result good?

What about the sieve? My settings were:

tasks.sieve.mfb0 = 70
tasks.sieve.mfb1 = 105
tasks.sieve.ncurves0 = 30
tasks.sieve.ncurves1 = 30
tasks.sieve.qrange = 1000
tasks.sieve.las.threads = 4
tasks.I = 16

Hundreds of workunits found 0 relations, like this one:
Info: Lattice Sieving: Found 0 relations in '', total is now 102/3794906671

VBCurtis 2020-01-06 21:34

The MurphyE score depends on other parameters that you didn't list. You also didn't list the polynomial, so I can't evaluate its quality myself.

You didn't list the large-prime bound parameters, nor lim0 and lim1, so I can't tell you how good or bad your param choices are.

I'd change I=16 to I=17, for a start.

If you know it is too large, why are you trying to factor it? Your machine isn't capable of finishing the job.

EDIT: Sorry, I had in mind a 768-bit number, not 1024-bit.
When the CADO group solved RSA768, they spent about 20 core-years on poly select. You spent 1 core-year (100 days * 4 cores, as a guess). So, you're about 5% of the way to "enough" for a number 1/1000th the difficulty of the one you're trying to factor.
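The "1/1000th the difficulty" figure tracks the heuristic GNFS complexity L_N[1/3, (64/9)^(1/3)]; a quick evaluation (standard formula, nothing project-specific assumed):
[code]import math

def gnfs_log_cost(bits):
    # log of L_N[1/3, c] = exp(c * (ln N)^(1/3) * (ln ln N)^(2/3))
    # with c = (64/9)^(1/3), the heuristic GNFS runtime exponent.
    ln_n = bits * math.log(2)
    return (64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)

ratio = math.exp(gnfs_log_cost(1024) - gnfs_log_cost(768))
print(f"1024-bit vs 768-bit GNFS cost: ~{ratio:.0f}x")  # ~1200x[/code]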

So, a mere 500000 more core-days of poly select should be about right. Then, the sieving might take 20-30x that long, depending on how good your parameter choice is.

Your parameters are in the ballpark of decent for a 768-bit number, though.

orever 2020-01-07 00:09

Thanks for the answer.
The complementary data:
POLY:
[code]
skew: 3280536.395
c0: -2963521726450804168545574989181604944974055470262264097100
c1: -8152690578728840862728149245184473131571416030581953
c2: 4530419292332128770249146516543999988943765890
c3: 87634441208849856388603554546291847879
c4: -844120661985043880048339652132070
c5: 13409199100611061897507722
c6: -59711977080
Y0: -108073099124290591101496444923166525774281367655542
Y1: 2355770915946536318263
# MurphyE (Bf=1.43e+09,Bg=7.00e+08,area=3.07e+18) = 4.29e-16
# f(x) = -59711977080*x^6+13409199100611061897507722*x^5-844120661985043880048339652132070*x^4+87634441208849856388603554546291847879*x^3+4530419292332128770249146516543999988943765890*x^2-8152690578728840862728149245184473131571416030581953*x-2963521726450804168545574989181604944974055470262264097100
# g(x) = 2355770915946536318263*x-108073099124290591101496444923166525774281367655542
[/code]



Param:

lim0 = 700000000
lim1 = 1430000000
lpb0 = 35
lpb1 = 36
maxfailed = 5000
name = c310
tasks.I = 16
tasks.maxtimedout = 3000
tasks.qmin = 1073741823
tasks.threads = 2
tasks.wutimeout = 24000
tasks.filter.maxlevel = 40
tasks.filter.target_density = 170.0
tasks.filter.purge.keep = 160
tasks.linalg.m = 64
tasks.linalg.n = 64
tasks.linalg.bwc.interleaving = 0
tasks.linalg.bwc.interval = 1000
tasks.linalg.characters.nchar = 50
tasks.polyselect.P = 10000000
tasks.polyselect.admax = 1e6
tasks.polyselect.adrange = 1000
tasks.polyselect.degree = 6
tasks.polyselect.incr = 60
tasks.polyselect.nq = 1296
tasks.polyselect.nrkeep = 100
tasks.polyselect.threads = 2
tasks.sieve.mfb0 = 70
tasks.sieve.mfb1 = 105
tasks.sieve.ncurves0 = 30
tasks.sieve.ncurves1 = 30
tasks.sieve.qrange = 1000
tasks.sieve.las.threads = 4
I read in params.c240 in git:

tasks.polyselect.P = 20000000
tasks.polyselect.admax = 2e12
tasks.polyselect.adrange = 10000000
tasks.polyselect.incr = 110880
tasks.polyselect.nq = 1296 # this is 6^4
tasks.polyselect.nrkeep = 100
tasks.wutimeout = 7200
tasks.polyselect.sopteffort = 20
tasks.polyselect.ropteffort = 10

I also read your post:

[url]https://mersenneforum.org/showpost.php?p=531881&postcount=6[/url]

"I bet they'd have better-yet performance with nq of 7776 and admax around 3e11"
Would these be better for a 1024-bit number?

VBCurtis 2020-01-07 01:57

You're missing the point entirely: you have copied/used parameters from either RSA768 (a 232-digit number) or RSA-240. These are, you know, SIXTY or more digits smaller than the number you wish to factor.

If you tried to use good parameters from a 180-digit job to factor a 240-digit job, they would fail miserably. So will yours, for the same reasons.

1. You're not getting any relations because the siever is too small. You need I=18 for a kilobit number. One instance of this siever requires 120GB or more of memory.

2. You don't know what you are doing, so you don't know how to scale any of these settings for this job. Since this job is impossible for you to complete, this doesn't matter a whole lot; but if you happen to be pals with Ben Delo and share his level of hardware resources, you would factor a 140-digit number, then 160, then 180, then 200, then 220, and see how the settings, memory needs, and time requirements change. Then you and Ben would realize 1024-bit GNFS is not a good idea, but maybe 250-digit GNFS is possible (it is).

3. There are discussions scattered around about trying to factor RSA-1024. You should find those papers or forum posts for ideas about how big a job this is, and what settings might be reasonable. Things like 4 large primes on one side & 3 on the other, and 42-bit large primes (or 43, or 44: how much disk space do you have?). Once you have 20 or 50 TB of disk prepared for this task, then you can decide how to write a grant proposal to get access to a cluster with, say, 512GB of memory per node to attempt the postprocessing. That matrix might have a billion rows and a billion columns.
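For scale, a minimal memory estimate for such a matrix, assuming ~170 nonzeros per row (the target_density figure mentioned earlier in this thread) and 4-byte column indices; both are assumptions, not measurements:
[code]rows = 1e9              # "a billion rows and a billion columns"
nonzeros_per_row = 170  # assumed, matching target_density above
index_bytes = 4         # a billion columns still fits in 32 bits
matrix_tb = rows * nonzeros_per_row * index_bytes / 1e12
print(f"~{matrix_tb:.2f} TB just to store the matrix")  # ~0.68 TB[/code]
Block Wiedemann needs working vectors and scratch space on top of that, hence cluster nodes with hundreds of GB each.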

In closing, your settings *might* factor a 230-digit number in some reasonable number of decades, but they won't factor a kilobit number before you die.

EDIT: I pasted your poly into cownoise to see what its score was. Results: optimal skew 8832525.55458 score 7.74032539e-22

