mersenneforum.org > Factoring Projects > Operation Kibibit

2012-07-25, 01:51   #23
bsquared

Quote:
Originally Posted by Dubslow View Post

Edit: I've been meaning to ask, what about CADO-NFS? Since it's still in development, theoretically it would be easier to extend to kilobit stuff than the GGNFS siever (and it'd certainly be easier than starting from scratch).
Possibly so. But I don't know that CADO is open for general contributors - it seems to be mostly the work of graduate students and I get the feeling that they want to keep it that way. I could be wrong, of course.

Quote:
Originally Posted by Dubslow View Post
Aside: Why bother starting a new project at all instead of contributing to existing tools? Is it because GGNFS and Msieve were initially separate projects (and still sort of are)? It seems to me that most sensical thing would be to have everybody work on only one piece of software.
That would be nice, sure. :) But because the various projects were not designed with any other project in mind, it is not entirely trivial to merge them. For instance, this. We started out strong at first but activity has been low lately. I'm still poking around at it in the background, but it's tough to get two large, complex codebases to play nicely together, let alone 3 or 4.

Another reason is momentum. At least for me, it was easier (and more fun) to start designing/writing a simple QS routine and build/optimize from there than to try to be immediately productive with a vastly complex thing like msieve. That's why we have yafu today and not a slightly faster msieve. Shallow vs. steep learning curve. This may or may not make sense with something as complex as a lattice sieve when something like CADO exists.

2012-07-25, 02:07   #24
Dubslow

Quote:
Originally Posted by bsquared View Post
Possibly so. But I don't know that CADO is open for general contributors - it seems to be mostly the work of graduate students and I get the feeling that they want to keep it that way. I could be wrong, of course.



That would be nice, sure. :) But because the various projects were not designed with any other project in mind, it is not entirely trivial to merge them. For instance, this. We started out strong at first but activity has been low lately. I'm still poking around at it in the background, but it's tough to get two large, complex codebases to play nicely together, let alone 3 or 4.

Another reason is momentum. At least for me, it was easier (and more fun) to start designing/writing a simple QS routine and build/optimize from there than to try to be immediately productive with a vastly complex thing like msieve. That's why we have yafu today and not a slightly faster msieve. Shallow vs. steep learning curve. This may or may not make sense with something as complex as a lattice sieve when something like CADO exists.
I'm not saying do any merges, I'm just saying have only one project in active development. Assuming CADO is open to contributors, why should jasonp be working on his own polyselect while Shi Bai is working on his own, completely separate polyselect? Wouldn't they both get further if they worked together? Perhaps for LinAlg it makes sense to have two roughly equal algorithms, but for the most part it still puzzles me.

And btw, what about (the unbanned, apparently) RDS's lattice siever? How optimized/extensible is that?


Last fiddled with by Dubslow on 2012-07-25 at 02:09 Reason: s/confuses/puzzles (also needed more smileys)

2012-07-25, 02:30   #25
bsquared

Quote:
Originally Posted by Dubslow View Post

And btw, what about (the unbanned, apparently) RDS's lattice siever? How optimized/extensible is that?

If he's unbanned then he can make his own comments, but I don't believe it is any more ready for kilobit jobs than ggnfs. It is probably more easily extensible because it is far easier to read, but it is also starting on unequal footing because it uses a slower methodology to construct the reduced lattice basis.

2012-07-25, 03:59   #26
ixfd64

Quote:
Originally Posted by bsquared View Post
Possibly so. But I don't know that CADO is open for general contributors - it seems to be mostly the work of graduate students and I get the feeling that they want to keep it that way. I could be wrong, of course.
But as CADO-NFS is licensed under the GPL, there is nothing to prevent someone from forking the project.

2012-07-25, 04:01   #27
Dubslow


Quote:
Originally Posted by ixfd64 View Post
But as CADO-NFS is licensed under the GPL, there is nothing to prevent someone from forking the project.
But that's exactly the opposite of the point I was trying to make. We (the developers, really) should be cooperating, not working separately.

2012-07-25, 06:10   #28
debrouxl


Quote:
Originally Posted by jasonp
Adding a polyselect binary to NFS@Home would be a neat idea, and I'd be happy to make modest structural changes to the Msieve source in order to make the task easier (tuning up the GPU code for mass appeal is not a modest structural change, maybe jrk could volunteer some time). Polyselect would make an ideal distributed application, it's low-memory and highly compute intensive.
Heh, you beat me to suggesting the use of BOINC distributed computing for that task.
It worked so well for RSALS, NFS@Home, yoyo@home and yafu, all of which are higher-memory than polynomial selection would be (at least, when yafu is processing C110 and higher).

Should stage 1 and stage 2 of polynomial selection be separated? I mean: collect all stage 1 hits, keep several hundred / thousand / tens of thousands of the best hits, and run the full stage 2 (root sieve, etc.) on those hits separately.
This way, if the polynomial selection algorithms/parameters are tuned as a result of post-processing the first dozens of core-years, we could re-run stage 2 on the best stage 1 hits multiple times.
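As a minimal sketch of that split (the hit format, `score` field, and `root_sieve` stub are invented stand-ins for illustration, not actual CADO-NFS or msieve interfaces):

```python
def root_sieve(hit):
    # Placeholder for the real stage-2 root sieve; here it just tags the hit.
    return {**hit, "stage2_done": True}

def run_stage2_on_best(stage1_hits, keep=1000):
    """Keep only the best stage-1 hits (assume lower score = better
    polynomial), then run the expensive stage 2 on that retained subset.
    Because the retained hits are stored, stage 2 can be re-run later
    with tuned parameters without redoing stage 1."""
    ranked = sorted(stage1_hits, key=lambda hit: hit["score"])
    return [root_sieve(hit) for hit in ranked[:keep]]
```

The point of the structure is exactly debrouxl's: the cheap ranking is decoupled from the expensive stage 2, so the survivor pool can be boiled again after parameter tuning.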

Last fiddled with by debrouxl on 2012-07-25 at 06:16

2012-07-25, 11:39   #29
jasonp


The best way to split poly selection is to have one group of computers run stage 1 plus the size optimization part of stage 2, which is a lot slower for degree 6 but still pretty quick in absolute terms and 100x faster than the root sieve. The code could be modified to save the top 10 or so optimized polynomials from each node, which get returned to the server and spot-checked. With RSA768 only about 1 in 100000 stage 1 hits were good enough to run the root sieve in stage 2, and for a much larger input the survival rate will be exponentially smaller.
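As a hedged illustration of the node-side "keep the top 10" filter described above (the scoring function and candidate format are invented for this sketch, not msieve's actual code):

```python
import heapq

def size_optimize(candidate):
    # Stand-in for the real size-optimization step; returns a fake
    # quality score (higher = better) for illustration only.
    return (candidate * 37) % 101

def node_top_polynomials(candidates, top_n=10):
    """Size-optimize every stage-1 hit produced on this node, but keep
    only the top_n best results in a bounded min-heap; only those are
    returned to the server for spot-checking."""
    best = []  # min-heap: the worst retained score sits at best[0]
    for cand in candidates:
        score = size_optimize(cand)
        if len(best) < top_n:
            heapq.heappush(best, (score, cand))
        elif score > best[0][0]:
            heapq.heapreplace(best, (score, cand))
    return sorted(best, reverse=True)  # best first
```

A bounded heap keeps node memory constant no matter how many stage-1 hits are processed, which matters if the survival rate really is 1 in 100000 or worse.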

2012-07-25, 19:09   #30
Xyzzy


We believe that the coordination of a task such as this will be even more interesting, and noteworthy, because everybody is not in the same place. It will require some very good organization.

BTW, if "Operation Kibibit" succeeds, we will commission another extraordinary run of limited edition buttons.


2012-07-25, 20:38   #31
Uncwilly

Quote:
Originally Posted by Xyzzy View Post
BTW, if "Operation Kibibit" succeeds, we will commission another extraordinary run of limited edition buttons.
I think that we will find 2 more Mersenne Primes before Operation Kibbles and Bits succeeds.

2012-07-25, 21:01   #32
Dubslow


Quote:
Originally Posted by Xyzzy View Post
We believe that the coordination of a task such as this will be even more interesting, and noteworthy, because everybody is not in the same place. It will require some very good organization.

BTW, if "Operation Kibibit" succeeds, we will commission another extraordinary run of limited edition buttons.
Well, I think we needn't worry too much about dedication. Xyzzy, me, xilman, and presumably bsquared and jasonp appear (willing) to be committed. It seems the only problem we have is lack of software...

Quote:
Originally Posted by jasonp View Post
The best way to split poly selection is to have one group of computers run stage 1 plus the size optimization part of stage 2, which is a lot slower for degree 6 but still pretty quick in absolute terms and 100x faster than the root sieve. The code could be modified to save the top 10 or so optimized polynomials from each node, which get returned to the server and spot-checked. With RSA768 only about 1 in 100000 stage 1 hits were good enough to run the root sieve in stage 2, and for a much larger input the survival rate will be exponentially smaller.
...but at least for poly select it sounds doable, with the help of RSALS/NFS@Home. (I'd recommend the former, simply because it was made for cracking RSA numbers.)

jasonp, how long do you think it would take to modify the poly select code?

More importantly, we need to figure out where to get a siever.

1) Start from scratch
2) Modify/extend GGNFS
3) Modify/extend CADO's siever
4) Optimize/extend RDS's siever

I can't comment on such things though.

Last fiddled with by Dubslow on 2012-07-25 at 21:02 Reason: dropping pic

2012-07-25, 21:09   #33
pinhodecarlos


Quote:
Originally Posted by debrouxl View Post
Heh, you beat me to suggesting the use of BOINC distributed computing for that task.
It worked so well for RSALS, NFS@Home, yoyo@home and yafu, all of which are higher-memory than polynomial selection would be (at least, when yafu is processing C110 and higher).

Should stage 1 and stage 2 of polynomial selection be separated? I mean: collect all stage 1 hits, keep several hundred / thousand / tens of thousands of the best hits, and run the full stage 2 (root sieve, etc.) on those hits separately.
This way, if the polynomial selection algorithms/parameters are tuned as a result of post-processing the first dozens of core-years, we could re-run stage 2 on the best stage 1 hits multiple times.
I think we have here a starting point.