#1
Apr 2004
Copenhagen, Denmark
2²·29 Posts
Paul once mentioned that the ultimate goal of NFSNET was a 1K-bit factorization. Even though that might be a bit much to ask right now, does anybody know when such a thing might be possible?

Can anyone give an estimate of how much computing power we have compared to other experienced NFS factorers, i.e. CWI, Franke/Kleinjung/Bahr and Aoki/Kida/Shimoyama/Sonoda/Ueda?

Best regards,
Jes Hansen
#2
Jun 2003
The Texas Hill Country
3²×11² Posts
The idea of a 1k-bit factorization is something that all of the groups you have mentioned discuss from time to time. It is my opinion that when such a factorization is attempted, it is likely to be a collective effort rather than a direct competition between the groups.

Remember that CWI has provided critical software that we use, and they did the linear algebra for the original NFSNET. Both our NFSNET group and CWI were participants in the RSA factorization that Jens did last year, etc.

In theory, we could attempt a very large factorization at any time. However, the sieving itself is only a part of the process. As you should be aware, there is often a wide disparity between a poorly chosen algorithm, or poorly chosen parameters, and an optimal choice. So rather than attempt a "brute force" attack involving hundreds or thousands of CPU-years, we are trying to learn about the process so that we can make better selections. 2,811- is a case in point. We expended a few CPU-years in post-processing on what turned out to be a poor choice. It would have been even worse if the problem had been larger and it had been CPU-decades instead.
#3
Mar 2003
Braunschweig, Germany
342₈ Posts
Recently I came across this article.

The authors try to give some numbers for the amount of work necessary to collect relations, and for the linear algebra, to factor a 1024-bit number with current technology. They say: Quote:

So even if we assume a "special" CPU is 10 times as efficient per cycle as a general-purpose CPU and takes 10 times less die space on the wafers, we are still talking about 18 million "devices" to be able to collect relations fast enough to crack one 1024-bit RSA key every month ;)

But "so what"? Even if factoring 1024-bit numbers is futile except for the decryption of some historical data encrypted by some uninformed officials, my hope is that the process of improving the algorithm may give some insight into the innermost properties of the factorisation problem and maybe ultimately lead to a "new", more efficient algorithm.
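To spell out that back-of-envelope calculation (the total sieving workload below is just a placeholder chosen so the arithmetic reproduces the 18 million figure; it is not a number taken from the paper):

```python
# Back-of-envelope device count for relation collection. Illustrative only:
# the sieving workload is a placeholder, not a figure taken from the paper.

sieving_workload_cpu_years = 1.5e8   # placeholder: total sieving cost in general-purpose CPU-years
per_cycle_speedup = 10               # assumed: special CPU does 10x more work per cycle
die_area_factor = 10                 # assumed: special CPU takes 10x less die space
months_per_key = 1                   # target: one 1024-bit key per month

effective_speedup = per_cycle_speedup * die_area_factor       # 100x per unit of silicon
device_years_needed = sieving_workload_cpu_years / effective_speedup
devices_running_continuously = device_years_needed / (months_per_key / 12.0)

print(f"{devices_running_continuously:,.0f} devices")         # 18,000,000 with these inputs
```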
#4
Jun 2003
The Texas Hill Country
3²×11² Posts
We are talking about a 1024-bit SNFS factorization. 1024-bit GNFS factorizations are MUCH harder.
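For reference, the standard heuristic complexities make the gap concrete. Writing
\[
L_N[s, c] = \exp\bigl( (c + o(1)) (\ln N)^{s} (\ln\ln N)^{1-s} \bigr),
\]
the two variants run in
\[
\text{SNFS: } L_N\!\left[\tfrac{1}{3},\ (32/9)^{1/3} \approx 1.526\right],
\qquad
\text{GNFS: } L_N\!\left[\tfrac{1}{3},\ (64/9)^{1/3} \approx 1.923\right].
\]
Equating the exponents (and ignoring the $o(1)$ and $\ln\ln N$ factors) says a general number costs about as much as a special number of roughly twice the bit length, which is why a kilobit SNFS number is the natural first target.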
#5
"Bob Silverman"
Nov 2003
North of Boston
1D58₁₆ Posts
Quote:

(1) The proposed hardware is, at this point, vaporware. Design details are totally lacking in the paper. Extensive work is required to finish the design.

(2) They analyze the time to solve matrices of two different proposed sizes: a 'small matrix' and a 'large matrix'. The large matrix is obtained by extrapolating the data from RSA-512. Even with their device, solving it would take about "170,000" years (comparable to a CRAY). While I agree that one can get a somewhat smaller matrix via more sieving, I have a strong theoretical argument that suggests the 'small matrix' can NOT be achieved in practice. I showed, in a paper submitted to Math. Comp., that attempts to reduce the factor base size beyond a certain point result in a GREATER than exponential increase in the sieve time. Further, real data gathered from actual factorizations shows that while collecting additional relations can help reduce the matrix, once you have gathered more than about 25% over the minimum, one gets strongly diminishing returns in trying to reduce the size of the matrix. Yes, one can reduce its SIZE, but its density (and hence storage) rapidly increases beyond what can be managed.

(3) NFS has been well analyzed. Exactly what expertise do you have that leads you to your belief that "the process of improving the algorithm may give some insight into the innermost properties of the factorisation problem and maybe ultimately lead to a 'new', more efficient algorithm"??? Also, please explain how factoring a 1024-bit number via SNFS will "improve the algorithm". What improvements did you have in mind? Any new algorithm will arise from a new mathematical idea, not from doing parametric studies of NFS. [Which is not to say that such studies lack value. They have a lot of value.]

(4) The proposed cost of the hardware is ridiculous. One needs to do the detailed architectural design and build a mask for a custom ASIC. One must then fabricate it in what will be fairly low quantity. Exactly how much do you think this will cost?

(5) The paper *IS* being studied in detail. I can not say more.
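Returning to point (2), to make explicit why density matters: for the iterative solvers in use (block Lanczos / block Wiedemann), a rough cost model is
\[
\text{time} \;\propto\; w N^{2}, \qquad \text{storage} \;\propto\; w N,
\]
where $N$ is the matrix dimension and $w$ the average row weight. Filtering that shrinks $N$ while inflating $w$ can easily leave the nonzero count $wN$ (and hence the storage and the per-iteration work) larger than before, even though the nominal "size" looks smaller.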
#6
Bamboozled!
May 2003
Down not across
3·3,907 Posts
Quote:

As for computing power, I'd describe it as "comparable". All such teams vary quite considerably in their available power as time goes by. Long term, it tends to rise (but sometimes falls to zero).

Some teams form and dissolve on an ad hoc basis. A prime example is "The Cabal", which at times is by far the largest factoring team going but at others (now, for instance) isn't doing anything at all. It's not dead, it's only resting. I'm pretty sure that The Cabal will start up again for a kilobit SNFS, if not before.

Paul
#7
Bamboozled!
May 2003
Down not across
3×3,907 Posts
Quote:

Take a look at http://www.win.tue.nl/~klenstra/fac_1024RSA.pdf

Paul
#8
"Bob Silverman"
Nov 2003
North of Boston
7512₁₀ Posts
Quote:

It is a very nice paper. However, it addresses only the sieving. This paper does come to the conclusion (which I claimed when the hardware was first proposed; I got pooh-poohed) that the smoothness bounds in the original proposal were unrealistic.

Allow me to suggest that there would be considerable merit in discussing the impact of the new estimates upon the matrix size and hence upon the time needed to solve it with custom hardware. How do your new smoothness estimates impact the proposed 'small matrix'?

Furthermore, the hardware design (for both siever & solver) still lacks sufficient detail for an implementation. It will take a lot of work to fill in the detail. This work is being performed. (I can't say more.)

Note that work done by S. Cavallar and P. Montgomery at CWI showed that even at the 800-bit level (based on actual performance), 3 large primes were too many (for 2,773+ and other experiments). My feeling is that for a general 1024-bit number, 3 large primes would be useful, but this is still unclear.

It was asked earlier why we (the entire NFS community) just didn't go for a 1024-bit number. The answer is that we would be unsure how to choose the parameters. Instead, we must slowly increase the size of the numbers we do, so that we can see the impact of our parameter choices.
#9
Mar 2003
Braunschweig, Germany
2·113 Posts
Hello!

First: I am not a mathematician, I studied computer science. And right now I am not working on research projects at all. So please keep that in mind if I write about math here ;) I know most of the regulars here have much more experience and understanding of the topics discussed. Feel free to tell me if I make a fool of myself. That said... Quote:
I read the paper again. I do not understand how they calculate the factor "5". Specifically, I could not find in their paper the effect that smooth relations are not found uniformly but "thin out" towards the "edges" of the "search rectangle". Don't you need a wider search region if you lower the factor base, and don't you then have at least two nonlinear effects (increase of the search space and decrease of the relative density of 'hits') that negatively affect the yield?

One other remark. In section 5.1 of the paper they write: "For the factorization of RSA-512 the matrix had about 6.7 million columns and average column density about 63 [3]. There is no doubt that this matrix is considerably smaller than a matrix that would have resulted from ordinary relation collection as defined in 2.6, cf. Remark 3.7. Nevertheless, we make the optimistic assumption that this is the size that would result from ordinary relation collection."

I don't understand the part about the "optimistic assumption". They design their "small matrix", using a smaller factor base, from a "large matrix" that was _already_ generated using a smaller factor base?! So isn't that the 'pessimistic assumption', and don't they need to blow up their "large matrix" to "ordinary relation collection" first to be on the safe side?
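My own (possibly naive) way of seeing the first effect: the probability that a value of size roughly $x$ is smooth with respect to a bound $B$ behaves like Dickman's $\rho(u)$ with $u = \ln x / \ln B$, and $\rho(u)$ falls off roughly like $u^{-u}$. So
\[
\text{relations needed} \approx \pi(B), \qquad
\text{yield per sieve point} \approx \rho\!\left(\frac{\ln x}{\ln B}\right) \sim u^{-u},
\]
meaning that shrinking $B$ reduces the number of relations you need only about linearly, while the yield per sieve point collapses much faster, so the sieve region (and the sieve time) must grow to compensate. Whether this is exactly the effect behind their factor 5 I cannot tell.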
Quote:

I agree that factoring a 1024-bit number via SNFS won't improve the algorithm at all, but the process of achieving this (arbitrary) goal may produce one or two really new ideas somewhere along the way.

Now GNFS: Mike Duvos wrote in 1995: "GNFS probably represents the final step in the evolution of the 'combination of congruences' factoring methods. Further refinements would probably be such complicated algorithms as to be impractical to program."

So with GNFS as the 'best' but also 'most complicated' algorithm available, I ask myself whether GNFS is in a sense a 'dead end' or not, and I agree with you that new algorithms will arise from new mathematical ideas. But how do we _get_ new ideas? I believe that understanding GNFS is a big challenge (at least for me). And perhaps the question of why (AFAIK) the optimal selection of the polynomials and/or factor bases is still not fully understood is not just a technical problem with GNFS but - if better understood in the future - may provide a really new approach to factoring.

So how about - for example - distributed parameter optimisation in GNFS? That means viewing parameter selection as a nonlinear optimization problem and using classic global optimization techniques (e.g. simulated annealing etc.) to find 'good' combinations of polynomials and factor bases. Would that be an interesting approach, or would you not expect any insights from more or less "random" permutations of the parameter set of GNFS?
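A rough sketch of the kind of loop I have in mind (purely illustrative: `estimated_sieve_time` is a stand-in for a real cost model or for short trial-sieving runs, and the parameter names are mine, not taken from any existing siever):

```python
import math
import random

def estimated_sieve_time(params):
    """Stand-in cost function. In reality this would be a yield/time model
    (or an actual short trial-sieving run) for the given parameter choices."""
    rfb, afb, lp_bits, skew = params
    # Purely made-up smooth function with a minimum somewhere in the middle.
    return ((math.log(rfb) - 17) ** 2
            + (math.log(afb) - 18) ** 2
            + 0.5 * (lp_bits - 30) ** 2
            + 0.1 * ((skew - 5000) ** 2) / 1e6)

def neighbour(params):
    """Randomly perturb one parameter."""
    rfb, afb, lp_bits, skew = params
    which = random.randrange(4)
    if which == 0:
        rfb = max(1e5, rfb * random.uniform(0.8, 1.25))
    elif which == 1:
        afb = max(1e5, afb * random.uniform(0.8, 1.25))
    elif which == 2:
        lp_bits = min(40, max(24, lp_bits + random.choice([-1, 1])))
    else:
        skew = max(1.0, skew * random.uniform(0.9, 1.1))
    return (rfb, afb, lp_bits, skew)

def anneal(start, steps=10000, t0=10.0):
    current, cost = start, estimated_sieve_time(start)
    best, best_cost = current, cost
    for i in range(steps):
        temp = t0 * (1 - i / steps) + 1e-9      # simple linear cooling schedule
        cand = neighbour(current)
        cand_cost = estimated_sieve_time(cand)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if cand_cost < cost or random.random() < math.exp((cost - cand_cost) / temp):
            current, cost = cand, cand_cost
            if cost < best_cost:
                best, best_cost = current, cost
    return best, best_cost

print(anneal((2.0e6, 4.0e6, 28, 4000.0)))
```

In practice each cost evaluation would be expensive, so one would distribute the trial-sieving runs over many machines, which is where the 'distributed' part would come in.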
Tau
#10
"Bob Silverman"
Nov 2003
North of Boston
2³·3·313 Posts
If you send me a private email (rsilverman@draper.com) I will send you a paper detailing the parametric issues with NFS. They are too long and involved to discuss here. The bottom line is that I have a proof that if one tries to force the factor base to be too small (I quantify what that means in the paper), then the size of the sieve region (and hence the sieve run time) starts to increase FASTER than exponentially. One can be a little bit wrong about the factor base size, but if you guess too small then you wind up paying a BIG penalty in sieve time. OTOH, if the factor base is too big, then the sieve time only increases LINEARLY with the increase in factor base size.

At Crypto '97 I gave a talk in which I showed that, at least for SNFS, the o(1) term in the run time was small (at least for my implementation). It was something like .03 to .05 near 160 digits. We can not be sure that this carries over to GNFS; the latter is much more sensitive to good polynomial selection. However, *if* the o(1) term is small near (say) 512 bits, then we can predict the size of the factor base needed for 1024 bits by computing L(2^1024)/L(2^512) times the actual size used for RSA-512.

There are good indications that the factor base used for RSA-512 was smaller than optimal (they used the same size as for RSA-140, a number 15 digits smaller). Thus, for RSA-512 the factor base size was already on "the wrong side" of the optimization curve. We might therefore expect that extrapolating to 1024 bits using the RSA-512 data would similarly be "on the wrong side". A naive extrapolation gives a factor base size that yields the "large matrix" from the Lenstra, Shamir, Tromer, et al. paper. The "large matrix" was way beyond what even their custom hardware could handle.

Others may, of course, reasonably disagree with my estimates. However, I still maintain that RSA-1024 is much too hard to do even with the proposed custom hardware. However, I would be quite pleased if someone builds the hardware and proves me wrong.
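For anyone who wants to play with that extrapolation, this is the shape of the calculation. The constant and the RSA-512 factor base size below are illustrative placeholders, not the values actually used:

```python
import math

def L(bits, c, s=1/3):
    """Heuristic L-function L_n[s, c] = exp(c * (ln n)^s * (ln ln n)^(1-s)),
    with the o(1) term ignored and n given as a bit length."""
    ln_n = bits * math.log(2)
    return math.exp(c * ln_n ** s * math.log(ln_n) ** (1 - s))

# (64/9)^(1/3) ~ 1.923 is the constant in the overall GNFS run time; the constant
# usually quoted for the smoothness bound (and hence the factor base) is
# (8/9)^(1/3), i.e. half of it.
c_fbase = (8 / 9) ** (1 / 3)

ratio = L(1024, c_fbase) / L(512, c_fbase)
rsa512_fbase = 15_000_000            # placeholder, NOT the actual RSA-512 figure

print(f"scaling factor {ratio:,.0f}, extrapolated factor base {ratio * rsa512_fbase:,.0f}")
```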
#11
Bamboozled!
May 2003
Down not across
3·3,907 Posts
Quote:

A major reason is that a number of contributors were using Arjen Lenstra's lattice siever, which had been written with RSA-130 in mind -- 25 digits smaller. A decision was taken at that time, years before RSA-512 was done, to limit factor base primes to 24 bits. This made a great deal of sense in the days of 32-bit machines and typical memory sizes of 16 megabytes. However, a sensible factor base size for a 130-digit integer is far from optimal for a 155-digit integer, as long as one is optimizing only computational effort.

As soon as other quantities are added to the collection to be optimized, other considerations come into play. One which is quite important in real life, as opposed to theoretical computer science, is time-to-market. Arjen's siever was available and capable of doing useful work.

Another consideration was data handling. The larger the factor bases and, especially, the larger the "large primes", the greater the amount of data to be transmitted, stored, managed and processed. I remember Steffi Cavallar, the person responsible for looking after most of the data, being kept very busy with the management of gigabytes of data held in tens of thousands of files. My experiences with recent NFSNET datasets have given me a very thorough understanding of what she went through. Richard Wackerbarth very likely also understands, as does Jens Franke as a consequence of his RSA-576 marathon.

Summary: computation is not the only quantity that needs optimization.

Paul
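A quick back-of-envelope, with approximate figures, for why a 24-bit bound was comfortable on such machines:

```python
import math

bound = 2 ** 24
# Prime number theorem approximation; the true pi(2^24) is about 1.08 million.
num_primes = bound / math.log(bound)
bytes_per_entry = 8        # say, a 4-byte prime plus a 4-byte root or logarithm

megabytes = num_primes * bytes_per_entry / 2 ** 20
print(f"~{num_primes:,.0f} primes, ~{megabytes:.1f} MB")
# Roughly 8 MB of factor base data -- workable alongside sieve arrays in 16 MB
# of RAM, whereas a substantially larger factor base would start to crowd it out.
```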