Old 2004-05-15, 00:11   #1
JHansen

Apr 2004
Copenhagen, Denmark

2²·29 Posts

The future of NFSNET

Paul once mentioned that the ultimate goal of NFSNET was a 1K-bit factorization. Even though that might be a bit much to ask right now, does anybody know anything about when such a thing might be possible?

Can anyone give an estimate of how much computing power we have compared to other experienced NFS factorers, i.e. CWI, Franke/Kleinjung/Bahr and Aoki/Kida/Shimoyama/Sonoda/Ueda?


---
Best regards
Jes Hansen
JHansen is offline  
Old 2004-05-15, 00:34   #2
Wacky

Jun 2003
The Texas Hill Country

3²×11² Posts

The idea of a 1k-bit factorization is something that all of the groups you have mentioned discuss from time to time. It is my opinion that when such a factorization is attempted, it is likely to be a collective effort rather than a direct competition between the groups.

Remember that CWI has provided critical software that we use, and they did the linear algebra for the original NFSNET. Both our NFSNET group and CWI were participants in the RSA factorization that Jens did last year, etc.

In theory, we could attempt to attack a very large factorization at any time. However, the sieving itself is only a part of the process. As you should be aware, there is often a wide disparity between a poorly chosen algorithm, or parameters, and an optimal choice. So rather than attempt a "brute force" attack involving hundreds or thousands of CPU-years, we are attempting to learn about the process so that we can make better selections.

2,811- is a case in point. We expended a few CPU-years in post-processing on what turned out to be a poor choice. It would have been even worse if the problem had been even larger and the waste had been CPU-decades instead.
Wacky is offline  
Old 2004-05-17, 13:31   #3
TauCeti

Mar 2003
Braunschweig, Germany

342₈ Posts

Recently I came across this article.

The authors try to give some numbers for the amount of work necessary to collect relations and for the linear algebra to factor a 1024-bit number with current technology.

They say:

Quote:
Using custom-built hardware to implement the improved circuit, the NFS matrix step becomes surprisingly inexpensive. However, the theoretical analysis shows that the cost of the relation collection step cannot be significantly reduced, regardless of the cost of the matrix step.
Their custom-built matrix-solving hardware costs only about $5k and solves the matrix in less than a day, but to collect the necessary relations, they estimate 150 million 1 GHz-CPU-years.

So even if we assume a "special" CPU is 10 times as efficient per cycle as a general-purpose CPU and takes 10 times less die space on the wafers, we are still talking about 18 million "devices" to be able to collect relations fast enough to crack one 1024-bit RSA key every month ;)
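
Written out, that back-of-the-envelope arithmetic looks roughly like this (a sketch only; the 10x/10x factors are exactly the assumptions just stated):

Code:
# Back-of-the-envelope check of the "18 million devices" figure.
work_cpu_years  = 150e6   # estimated relation collection: 150 million 1 GHz-CPU-years
efficiency_gain = 10      # assumption: special CPU does 10x the work per cycle
density_gain    = 10      # assumption: special CPU needs 10x less die area
per_silicon_gain = efficiency_gain * density_gain      # 100x per unit of wafer area

device_years = work_cpu_years / per_silicon_gain       # 1.5 million device-years
devices_for_one_month = device_years * 12              # finish the job in 1/12 of a year
print(devices_for_one_month)                           # 18,000,000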

But "so what"? Even if factoring 1024-bit is futile but for the decryption of some historical data encoded by some uninformed officials, my hope is, that the process of improving the algorithm may give some insight into the innermost properties of the factorisation problem and maybe ultimatly lead to a "new", more efficient algorithm.
TauCeti is offline  
Old 2004-05-17, 14:01   #4
Wacky

Jun 2003
The Texas Hill Country

3²×11² Posts

We are talking about a 1024-bit SNFS factorization. 1024-bit GNFS factorizations are MUCH harder.
Wacky is offline  
Old 2004-05-17, 15:29   #5
R.D. Silverman

"Bob Silverman"
Nov 2003
North of Boston

1D58₁₆ Posts

Quote:
Originally Posted by TauCeti
Recently I came across this article.

The authors try to give some numbers for the amount of work necessary to collect relations and for the linear algebra to factor a 1024-bit number with current technology.

They say:



Their custom-built matrix-solving hardware costs only about $5k and solves the matrix in less than a day, but to collect the necessary relations, they estimate 150 million 1 GHz-CPU-years.

So even if we assume a "special" CPU is 10 times as efficient per cycle as a general-purpose CPU and takes 10 times less die space on the wafers, we are still talking about 18 million "devices" to be able to collect relations fast enough to crack one 1024-bit RSA key every month ;)

But "so what"? Even if factoring 1024-bit keys is futile except for the decryption of some historical data encrypted by some uninformed officials, my hope is that the process of improving the algorithm may give some insight into the innermost properties of the factorisation problem and maybe ultimately lead to a "new", more efficient algorithm.
(1) Their "custom hardware" consists of very high level handwaving. It is
vaporware. Design details are totally lacking in the paper. Extensive work
is required to finish the design.

(2) They analyze the time to solve matrices of two different proposed sizes: a 'small matrix' and a 'large matrix'. The large matrix is obtained by extrapolating the data from RSA-512. Even with their device, solving it would take about 170,000 years (comparable to a Cray). While I agree that one can get a somewhat smaller matrix via more sieving, I have a strong theoretical argument that suggests the 'small matrix' can NOT be achieved in practice. I showed, in a paper submitted to Math. Comp., that attempts to reduce the factor base size beyond a certain point result in a GREATER than exponential increase in the sieve time.
Further, actual data gathered from real factorizations shows that while collecting additional relations can help reduce the matrix, once you have gathered more than about 25% over the minimum, you get strongly diminishing returns in trying to reduce the size of the matrix. Yes, one can reduce its SIZE, but its density (and hence storage) rapidly increases beyond what can be managed.

(3) NFS has been well analyzed. Exactly what expertise do you have that leads you to your belief that "the process of improving the algorithm may give some insight into the innermost properties of the factorisation problem and maybe ultimately lead to a 'new', more efficient algorithm"???
Also, please explain how factoring a 1024-bit number via SNFS will "improve the algorithm". What improvements did you have in mind? Any new algorithm will arise from a new mathematical idea, not from doing parametric studies of NFS. [Which is not to say that such studies lack value. They have a lot of value.]

(4) The proposed cost of the hardware is ridiculous. One needs to do the detailed architectural design and build a mask for a custom ASIC. One must then fabricate it in what will be fairly low quantity. Exactly how much do you think this will cost?

(5) The paper *IS* being studied in detail. I can not say more.
R.D. Silverman is offline  
Old 2004-05-17, 15:50   #6
xilman
Bamboozled!

"𒉺𒌌𒇷𒆷𒀭"
May 2003
Down not across

3·3,907 Posts

Quote:
Originally Posted by JHansen
Paul once mentioned that the ultimate goal of NFSNET was a 1K-bit factorization. Even though that might be a bit much to ask right now, does anybody know anything about when such a thing might be possible?

Can anyone give an estimate of how much computing power we have compared to other experienced NFS factorers, i.e. CWI, Franke/Kleinjung/Bahr and Aoki/Kida/Shimoyama/Sonoda/Ueda?


---
Best regards
Jes Hansen
I'd phrase that as "a goal" of NFSNET. That is, it is something to aim for but neither the only nor the final goal.

As for computing power, I'd describe it as "comparable". All such teams vary quite considerably in their available power as time goes by. Long term, it tends to rise (but sometimes falls to zero), yet it can both rise and fall on a day-to-day or week-to-week basis. To see an example of this phenomenon, take a look at the NFSNET stats site.

Some teams form and dissolve on an ad hoc basis. A prime example is "The Cabal" which at times is by far the largest factoring team going but at others (now, for instance) isn't doing anything at all. It's not dead, it's only resting. I'm pretty sure that The Cabal will start up again for a kilobit SNFS, if not before.

Paul
xilman is offline  
Old 2004-05-17, 15:56   #7
xilman
Bamboozled!

"𒉺𒌌𒇷𒆷𒀭"
May 2003
Down not across

3×3,907 Posts

Quote:
Originally Posted by Bob Silverman

...

(5) The paper *IS* being studied in detail. I can not say more.
The paper has been studied in detail. I can say more.
Take a look at http://www.win.tue.nl/~klenstra/fac_1024RSA.pdf


Paul
xilman is offline  
Old 2004-05-17, 17:02   #8
R.D. Silverman

"Bob Silverman"
Nov 2003
North of Boston

7512₁₀ Posts

Quote:
Originally Posted by xilman
The paper has been studied in detail. I can say more.
Take a look at http://www.win.tue.nl/~klenstra/fac_1024RSA.pdf


Paul
Hi Paul,

It is a very nice paper. However, it addresses only the sieving.

This paper does come to the conclusion (which I claimed when the hardware
was first proposed; I got pooh-poohed) that the smoothness bounds in the
original proposal were unrealistic. Allow me to suggest that there would be
considerable merit in discussing the impact of the new estimates upon the
matrix size and hence upon the time needed to solve it with custom
hardware. How do your new smoothness estimates impact the proposed
'small matrix'?

Furthermore, the hardware design (for both siever & solver) still lacks sufficient detail for an implementation. It will take a lot of work to fill
in the detail. This work is being performed. (I can't say more)

Note that work done by S. Cavallar and P. Montgomery at CWI showed that even at the 800-bit level (based on actual performance), 3 large primes were too many (for 2,773+ and other experiments). My feeling is that for a general 1024-bit number, 3 large primes would be useful, but this is still unclear. Rhetorical question: would we use 3 on each side of the congruence?

It was asked earlier why we (the entire NFS community) don't just go for a 1024-bit number. The answer is that we would be unsure how to choose the parameters. Instead, we must slowly increase the size of the numbers we do so that we can see the impact of our parameter choices.
R.D. Silverman is offline  
Old 2004-05-18, 15:10   #9
TauCeti

Mar 2003
Braunschweig, Germany

2·113 Posts

Hello!

First: I am not a mathematician; I studied computer science. And right now I am not even working on research projects at all. So please keep that in mind if I write about math here ;) I know most of the regulars here have much more experience and understanding of the topics discussed. Feel free to tell me if I make a fool of myself.

That having been said...

Quote:
Originally Posted by Bob Silverman
(2) [Small Matrix]
Well, to me it looks like the ability to build that "small matrix" is the key requirement of the whole paper. And I - perhaps wrongly - assumed that their estimates (they reduce the matrix column count from 10^10 to 4·10^7 and only increase the time to collect the relations by about a factor of 5) are correct.

I read the paper again. I do not understand how they calculate the factor "5". Specifically, I could not find in their paper the effect that smooth relations are not found uniformly but "thin out" towards the "edges" of the "search rectangle".
Don't you need a wider search region if you lower the factor base, and don't you then have at least two nonlinear effects (growth of the search space and the relative density of 'hits') that negatively affect the yield?
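
To make that second effect concrete, here is a rough illustration using the crude u^(-u) approximation to the probability that a value of size x is B-smooth; the sizes are made-up round numbers, not values from the paper:

Code:
# Rough illustration (not from the paper): how the chance that a sieve value
# is B-smooth collapses as the factor-base bound B shrinks.  Uses the crude
# approximation Pr[B-smooth] ~ u^(-u) with u = ln(x)/ln(B).
import math

x = 10**60                      # made-up typical size of a sieve value
for B in (10**9, 10**8, 10**7, 10**6):
    u = math.log(x) / math.log(B)
    p = u ** -u                 # approximate probability that x is B-smooth
    print(f"B = 10^{round(math.log10(B))}:  u = {u:.1f},  Pr[smooth] ~ {p:.1e}")

A smaller factor base does mean fewer relations are needed, but (at least in this crude model) nowhere near enough to offset the collapse in yield, so the search region has to grow disproportionately.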

One other remark. In section 5.1 of the paper they write:

"For the factorization of RSA-512 the matrix had about 6.7 million columns and average column density about 63 [3]. There is no doubt that this matrix is considerably smaller than a matrix that would have resulted from ordinary relation collection as defined in 2.6, cf. Remark 3.7. Nevertheless, we make the optimistic assumption that this is the size that would result from ordinary relation collection."

I don't understand the part about the "optimistic assumption". They design their "small matrix" using a smaller factor base, starting from a "large matrix" that was _already_ generated using a smaller factor base?! So isn't that the 'pessimistic assumption', and wouldn't they need to blow up their "large matrix" first to "ordinary relation collection" to be on the safe side?

Quote:
Originally Posted by Bob Silverman
(3) NFS has been well analyzed. Exactly what expertise do you have that leads you to your belief that "the process of improving the algorithm may give some insight into the innermost properties of the factorisation problem and maybe ultimately lead to a 'new', more efficient algorithm"???
Also, please explain how factoring a 1024-bit number via SNFS will "improve the algorithm". What improvements did you have in mind? Any new algorithm will arise from a new mathematical idea, not from doing parametric studies of NFS. [Which is not to say that such studies lack value. They have a lot of value.]
First: I have no mathematical expertise at all beyond my small personal background in computer science. But I find it scientifically and philosophically challenging to discuss _why_ there seem to be different complexity classes for different problems. That's the main reason factoring fascinates me.

I agree that factoring a 1024-bit number via SNFS won't improve the algorithm at all, but the process of achieving this (arbitrary) goal may just produce one or two really new ideas somewhere.

Now GNFS: Mike Duvos wrote in 1995: "GNFS probably represents the final step in the evolution of the 'combination of congruences' factoring methods. Further refinements would probably be such complicated algorithms as to be impractical to program."

So with GNFS as the 'best' but also 'most complicated' algorithm available, I ask myself whether GNFS is in a sense a 'dead end' or not, and I agree with you that new algorithms will arise from new mathematical ideas. But how do we _get_ new ideas?

I believe that understanding GNFS is a big challenge (at least for me). And perhaps the question of why (AFAIK) the optimal selection of the polynomials and/or factor bases is still not fully understood is not just a technical problem with GNFS but - if better understood in the future - may provide a really new approach to factoring.

So how - for example - about distributed parameter optimisation in GNFS?
That means viewing parameter selection as a nonlinear optimization problem and using classic global optimization techniques (e.g. simulated annealing) to find 'good' combinations of polynomials and factor bases.

Would that be an interesting approach, or would you not expect any insights from more or less "random" permutations of the parameter set of the GNFS?
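
As a toy sketch of what I mean (everything here is hypothetical: the parameter tuple and the cost() function are stand-ins, since in reality the cost would have to be measured by trial sieving with the candidate parameters):

Code:
# Hypothetical sketch: simulated annealing over a caricature of GNFS parameters.
# cost() is a made-up stand-in; a real objective would come from trial sieving.
import math, random

def cost(params):
    fb_bits, region_bits = params
    # pretend sieve time is minimised near some unknown "good" settings
    return (fb_bits - 26.5) ** 2 + 0.5 * (region_bits - 31.0) ** 2

def neighbour(params):
    return tuple(p + random.gauss(0, 0.5) for p in params)

params = best = (24.0, 28.0)      # starting guess: factor-base bits, region bits
temp = 10.0
for step in range(5000):
    cand = neighbour(params)
    delta = cost(cand) - cost(params)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        params = cand             # accept downhill moves, and uphill ones sometimes
    if cost(params) < cost(best):
        best = params
    temp *= 0.999                 # cool down slowly
print("best parameters found:", best)

With a real, trial-sieved cost function every evaluation is expensive, which is exactly where the 'distributed' part would come in.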

Quote:
Originally Posted by Bob Silverman
(4) The proposed cost of the hardware is ridiculous. One needs to do the detailed architectural design and build a mask for a custom ASIC. One must then fabricate it in what will be fairly low quantity. Exactly how much do you think this will cost?
I have no real idea. Sources on the Internet indicate > $1 million for traditional ASIC design, including minimum order quantities and excluding design/test work. It looks like the authors of the paper were talking about unit costs in high-volume production. Hmmm.

Quote:
Originally Posted by Bob Silverman
The paper *IS* being studied in detail. I can not say more.
I don't understand the 'I can not say more' part, but it's a nice discussion here nonetheless ;)

Tau
TauCeti is offline  
Old 2004-05-20, 15:33   #10
R.D. Silverman

"Bob Silverman"
Nov 2003
North of Boston

2³·3·313 Posts

If you send me a private email (rsilverman@draper.com) I will send
you a paper detailing the parametric issues with NFS. They are too
long and involved to discuss here.

The bottom line is that I have a proof that if one tries to force the
factor base to be too small (I quantify what that means in the paper),
then the size of the sieve region (and hence the sieve run time) starts
to increase FASTER than exponentially. One can be a little bit wrong
about the factor base size, but if you guess too small then you wind up
paying a BIG penalty in sieve time. OTOH, if the factor base is too big, then
the sieve time only increases LINEARLY with the increase in factor base size.
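
In rough heuristic terms (a sketch only, not the argument from the paper), the asymmetry can be seen by writing $B$ for the factor-base bound, $x$ for the typical size of a sieve value, and $u = \ln x / \ln B$:

$$ \text{sieve effort}(B) \;\approx\; \pi(B) \cdot u^{u}, $$

where $\pi(B) \approx B/\ln B$ roughly counts the relations needed and $u^{-u}$ approximates the probability that a sieve value is $B$-smooth. For $B$ above the optimum, the $\pi(B)$ factor dominates and the effort grows only about linearly in $B$; pushing $B$ below the optimum drives $u$ up, and the $u^{u}$ factor then blows up much faster than exponentially.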

At Crypto '97 I gave a talk in which I showed that, at least for SNFS, the o(1) term in the run time was small (at least for my implementation). It was something like .03 to .05 near 160 digits. We can not be sure that this carries over to GNFS; the latter is much more sensitive to good polynomial selection. However, *if* the o(1) term is small near (say) 512 bits, then we can predict the size of the factor base needed for 1024 bits by computing L(2^1024)/L(2^512) * the actual size used for RSA-512.
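
For concreteness, that naive extrapolation reads roughly as follows (a sketch only: I take L to be the usual heuristic GNFS complexity function, drop the o(1) term entirely, and note that which constant one attaches to the factor base, as opposed to the total run time, is itself a choice):

Code:
# Sketch of the naive extrapolation L(2^1024)/L(2^512), with o(1) dropped.
# c = (64/9)^(1/3) ~ 1.92 is the GNFS run-time exponent; the smoothness bound
# is conventionally taken to grow with half that exponent, i.e. (8/9)^(1/3).
import math

def L(bits, c):
    ln_n = bits * math.log(2)
    return math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

c_runtime = (64 / 9) ** (1 / 3)
for label, c in (("run-time exponent", c_runtime), ("factor-base exponent", c_runtime / 2)):
    ratio = L(1024, c) / L(512, c)
    print(f"{label}: L(2^1024)/L(2^512) ~ {ratio:.2g}")
# prints roughly 8e6 and 3e3; scaling the RSA-512 factor base by the second
# ratio is the kind of naive extrapolation described above.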

There are indications (good ones) that the fbase used for RSA-512 was smaller than optimal (they used the same size as for RSA-140, which is 15 digits smaller). Thus, for RSA-512 the fbase size was already on "the wrong side" of the optimization curve. We might therefore expect that extrapolating to 1024 bits using the RSA-512 data would similarly be "on the wrong side". A naive extrapolation gives an fbase size that yields the "large matrix" from the Lenstra, Shamir, Tromer, et al. paper. The "large matrix" was way beyond what even their custom hardware could handle.

Others may, of course, reasonably disagree with my estimates. However, I still maintain that RSA-1024 is much too hard to do even with the proposed custom hardware. That said, I would be quite pleased if someone builds the hardware and proves me wrong.
R.D. Silverman is offline  
Old 2004-05-20, 15:53   #11
xilman
Bamboozled!

"𒉺𒌌𒇷𒆷𒀭"
May 2003
Down not across

3·3,907 Posts

Quote:
Originally Posted by Bob Silverman
There are indications (good ones) that the fbase used for RSA-512 was smaller than optimal (they used the same size as for RSA-140, which is 15 digits smaller). Thus, for RSA-512 the fbase size was already on "the wrong side" of the optimization curve. We might therefore expect that extrapolating to 1024 bits using the RSA-512 data would similarly be "on the wrong side". A naive extrapolation gives an fbase size that yields the "large matrix" from the Lenstra, Shamir, Tromer, et al. paper. The "large matrix" was way beyond what even their custom hardware could handle.

Others may, of course, reasonably disagree with my estimates. However, I still maintain that RSA-1024 is much too hard to do even with the proposed custom hardware. That said, I would be quite pleased if someone builds the hardware and proves me wrong.
I'm with Bob on this one. There is no doubt that the RSA-512 factor bases were too small. There were, however, good reasons for the choice.

A major reason is that a number of contributors were using Arjen Lenstra's lattice siever which had been written with RSA-130 in mind -- 25 digits smaller. A decision was taken at that time, years before RSA-512 was done, to limit factor base primes to 24 bits. This made a great deal of sense in the days of 32-bit machines and typical memory sizes of 16 megabytes. However, a sensible factor base size for a 130-digit integer is far from optimal for a 155-digit integer, as long as one is optimizing only computational effort.

As soon as other quantities are added to the collection to be optimized, other considerations come into play. One which is quite important in real life, as opposed to theoretical computer science, is time-to-market. Arjen's siever was available and capable of doing useful work.

Another consideration was data handling. The larger the factor bases and, especially, the larger the "large primes", the greater the amount of data to be transmitted, stored, managed and processed. I remember Steffi Cavallar, the person responsible for looking after most of the data, being kept very busy with the management of gigabytes of data held in tens of thousands of files. My experiences with recent NFSNET datasets have given me a very thorough understanding of what she went through. Richard Wackerbarth very likely also understands, as does Jens Franke as a consequence of his RSA-576 marathon.

Summary: computation is not the only quantity that needs optimization.

Paul
xilman is offline  
 
