2010-11-05, 22:45  #1 
"Bob Silverman"
Nov 2003
North of Boston
1110101010100_{2} Posts 
Reservations
Hi,
Here is 2,2166L C201 = p89.p112. Credit goes to Greg for doing the linear algebra.

p89 = 92773717776724293033129569511474025501212273295210910099457850169016216770560658749545493
p112 = 7972809260662298457419192524616051892080791400877603127506516722916348870345784103070515656587116007776553127153

Sieving for 2,1870L is in progress. It seems that this is the last factor I will contribute until the tables are extended. All other composites within reach of my CPU resources have been grabbed by either Bruce or Raman.

Allow me to say that I strongly object to Raman having grabbed so many assignments at once. I view his grabbing of 8 numbers at once as very selfish. Bruce took the only other number, 2,1870M, that was within reach for me. But that is only ONE number. I never reserved more than one number if I couldn't finish it within about 3 months.

I'll probably go back to working on the Fibonacci/Lucas numbers.

Bob 
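As a quick sanity check, using nothing beyond the digits quoted in the post, the two reported prime factors really do have 89 and 112 digits, and their product has the 201 digits of the C201:

```python
# Sanity check on the reported split 2,2166L C201 = p89.p112:
# the two primes quoted above should have 89 and 112 digits,
# and their product should be a 201-digit number.
p89 = 92773717776724293033129569511474025501212273295210910099457850169016216770560658749545493
p112 = 7972809260662298457419192524616051892080791400877603127506516722916348870345784103070515656587116007776553127153

for n, digits in ((p89, 89), (p112, 112)):
    assert len(str(n)) == digits

assert len(str(p89 * p112)) == 201
print("digit counts check out")
```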
2010-11-06, 04:09  #2  
Noodles
"Mr. Tuch"
Dec 2007
Chennai, India
2351_{8} Posts 
Quote:
expected to complete during this month. That will leave only five. Most of the sieving activity is done in parallel; it is only the linear algebra that takes longer (I plan to make use of the MPI interface for all my future numbers). 2,1910M 2,2226L 2,2238L are in linear algebra; all three are expected to complete within another 20 days or so. 2,2334M is sieving; I plan to sieve 2,985 next.

Why? These numbers are certainly out of reach for your resources, as you yourself said before. Didn't I discuss with you beforehand what you wanted to do? I am not responsible if someone else takes up numbers that are within your reach, such as 2,1870M, 5,785M or 5,815L.

By the way, this will be the last set of numbers that I will be able to contribute; I will not have any resources after that period of time. Let me add that there are many similar-sized tasks for other people to work on: 6,349 5,389 3,569 11,539M 7,341+ 6,374+ 2,979+ 2,988+ 7,749L

Last fiddled with by Raman on 2010-11-06 at 04:28 

2010-11-07, 13:55  #3 
Noodles
"Mr. Tuch"
Dec 2007
Chennai, India
1257_{10} Posts 
Let me put it another way: it is not the sieving, but the linear algebra,
that takes much longer.

For example, for 5,415+ the sieving took only 30 days, but the linear algebra took up to 40 days. While the linear algebra for 5,415+ was running, I finished the sieving for three numbers (2,1910M 2,2226L 2,2238L). The 5,415+ sieving required a special-q range up to 190M to build the 14.3M matrix; 2,1910M 2,2226L 2,2238L together required a special-q range up to 290M, with the help of the gnfs-lasieve4I15e lattice siever from the GGNFS suite.

The long linear algebra time comes from the fact that I parallelize the sieving over up to 150 cores, but use only 4 cores for the linear algebra; anything more than that would be wasteful, since it hardly improves the linear algebra speed. Instead, I can use the rest of the resources to sieve some other number. I use msieve's hyperthreading facility, with msieve version 1.43, for the linear algebra. I am sure the linear algebra would be much faster on a Core i7 processor, but all the processors I have are Core 2 Duo and Core 2 Quad.

For future jobs I will use msieve's MPI interface for the linear algebra, but for the current linear algebra jobs it is not needed. The duration of the linear algebra will depend strongly on the use of the MPI interface, but I would hope it would certainly be much faster than mere hyperthreading on four cores. Four-core hyperthreading with msieve v1.43 requires about 19 days for a 10M matrix and 38 days for a 14.14M matrix, and I expect around 76 days for a matrix of 20M dimensions.

I wish to say that I reserve numbers depending only upon my resources. There have been instances when 2 or 3 linear algebra jobs have run in parallel, each using only up to 4 cores. 
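The timings quoted (19 days at 10M, 38 days at 14.14M, ~76 days projected at 20M) are consistent with the linear-algebra cost growing roughly as the square of the matrix dimension, as expected for block Lanczos on a sparse matrix: O(n) iterations, each costing O(n). A quick check, using only the figures from the post:

```python
# msieve-1.43 linear-algebra timings on 4 hyperthreaded cores, as quoted:
# 19 days for a 10M matrix, 38 days for 14.14M, ~76 days expected for 20M.
# Block Lanczos work grows roughly as dimension^2, which matches.
base_dim, base_days = 10.0e6, 19.0

def predicted_days(dim):
    """Quadratic extrapolation from the 10M / 19-day data point."""
    return base_days * (dim / base_dim) ** 2

assert round(predicted_days(14.14e6)) == 38   # matches the quoted 38 days
assert round(predicted_days(20.0e6)) == 76    # matches the quoted 76 days
print(predicted_days(16.0e6))  # rough estimate for a 16M matrix
```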
Suppose that each sieving job takes only about 20 days; then every 20 days I would have to reserve one number. Instead, I reserve several collectively. Please understand that it is a rather big pipeline of factoring tasks.

The numbers I work on, quartics, had not been in demand before. I see that the low-hanging fruit in 2010 (SNFS sextic difficulty < 270, SNFS quartic difficulty < 230, GNFS difficulty < 180) has been exhaustively reserved and factored. Is it due to my wiki tables? Let me add that I consulted with you and left some numbers for you. It is someone else who actually picked up all those other numbers. Please don't blame me.

I decided to do the smaller-but-needed number 2,1910M first, before 2,2226L and 2,2238L; that is why the linear algebra is taking longer. If it had been the other way around, then 2,2226L and 2,2238L would certainly have been finished by now. It is a 13.6M matrix for 2,1910M, using an SNFS quartic of difficulty 229.

Finally, I wish to say that the quartics are not as hard as you expect. For example, for 2,2334M (SNFS quartic difficulty = 234 digits), going by the number's size, I would anticipate a rational-side special-q range of around 220M for the sieving. This can be done within about 30 days; then expect about a 16M matrix for the linear algebra (just a prediction, simply). Similar to a sextic of difficulty 256. Sieving for 2,2334M is about halfway done; within around 20 days I should be able to start sieving my next number. Wouldn't you like to receive a new factor every 20 days or so (2,1910M by mid-November, 2,2226L and 2,2238L by late November or early December)?

Why is it that you attack only my current set of reservations, but not previous cases at all? 
I have been reserving about 4 or 5 numbers at a time since the start of 2010. These current five numbers are my ultimate targets, so they will be my last set of reservations. Why should you get so serious about all these things? After all, this is being done only for fun. It is an exciting hobby; factoring numbers has provided me with a lot of fun during the past 3 years. It is neither any type of commercial work, nor any sort of academic project with a deadline that is to be published.

Last fiddled with by Raman on 2010-11-07 at 14:43 
2010-11-07, 14:30  #4  
Jun 2005
lehigh.edu
10000000000_{2} Posts 
Quote:
down exactly which numbers Bob is referring to with "take the rest". Quote:
idle here; 5,785M and 5,815L were reserved long after your seven current numbers, and are already completed; and 2,1870M is sieving now, with the orphaned 2,881+ filtering. We spent quite some time on poly selection for 3,610+ c176, which is Serge's ecm cofactor; and 5,397+ c270 is in sims, and out of the present range of discussion (by difficulty).

The two numbers you didn't mention are 10,590M and one of the five 2LM's? Looks like 2,1930M c201. I wondered whether either of these is in Bob's range, if his sievers are going idle, while you're not considering starting them for a while (with "2,2334M is sieving" and "2,985 next"). Just wondering; since you went out of your way to mention three of our numbers, while Bob mentions just the one that's still unfactored.

Bruce

Postscript, upon arrival of Raman's second reply: Now what's intended by "all those other numbers"? I've accounted for our four current reservations; only the one Bob mentions is relevant here. The issue, as I understand it, is _reservations_, not completed factorizations.

Last fiddled with by bdodson on 2010-11-07 at 14:41 

2010-11-08, 13:32  #5  
"Bob Silverman"
Nov 2003
North of Boston
2^{2}·1,877 Posts 
Quote:
I could probably do 2,1930M. (4-5 months of sieving; I have about 18 to 24 machines half-time and 3 machines full-time; availability varies). 10,590M is probably just out of range. (6-8 months of sieving; yech)

I anticipate that I will finish 2,1870L sometime in early/mid January. I may instead turn to doing ECM work on the base-2+ numbers and the 2LM numbers. I have some questions:

What limits have you run to date on these numbers?

How long does it take to run step 1 to (say) 260M for one curve on a number in the 250 to 300 digit range? I can only run most of my machines at night (6PM to 6AM). If step 1 fails to finish, is there any way to get the code to emit the latest value for N * p, and then restart step 1 from that point? i.e. do step 1 in stages? I know one can emit N*p when step 1 finishes, so one can do step 2 separately.

My NFS siever is set up so that it runs for a specified time, then exits. I doubt whether GMP-ECM can do the same thing, although as we push to higher limits, it might be a nice feature to add to the code.

Would you recommend using the GIMPS code for step 1, and the GMP-ECM code for step 2?

Can step 2 be done in stages? It would need to dump its internal state, and this would be a LOT of data to save to disk. How far can one take step 2 on a 250-digit number in 12 hours?

Would 3,610+ be faster with SNFS? It is close. 

2010-11-08, 14:46  #6  
Bamboozled!
"๐บ๐๐ท๐ท๐ญ"
May 2003
Down not across
2×7^{3}×17 Posts 
Quote:
GMP-ECM lets you choose a range of B2 values, B2min and B2max (run ecm -h for details). How to partition a given B2 range into multiple stages is left as an exercise. I can't answer your other questions off the top of my head, sorry.

Paul 
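A sketch of how this staging can look in practice with GMP-ECM's documented options (-save, -resume, -chkpnt, and the B2min-B2max syntax Paul mentions); the file names and the composite input file "number.txt" are hypothetical:

```shell
# Step 1 to B1 = 260e6 with B2 = 1 (i.e. no step 2), periodically
# checkpointing so an interrupted overnight run can be restarted;
# the final step-1 residue is written to resid.save.
ecm -chkpnt chk.save -save resid.save 260e6 1 < number.txt

# If the run was killed at 6AM, continue step 1 from the checkpoint:
ecm -chkpnt chk.save -save resid.save -resume chk.save 260e6 1

# Later, run step 2 alone from the saved residue, split into two
# sub-ranges via the B2min-B2max argument:
ecm -resume resid.save 260e6 260e6-1e11
ecm -resume resid.save 260e6 1e11-3e11
```

This answers the step-1 staging question directly: the checkpoint file holds the intermediate residue, so step 1 can be done in pieces across several nights without redoing work.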

2010-11-08, 14:55  #7  
"Bob Silverman"
Nov 2003
North of Boston
2^{2}·1,877 Posts 
Quote:


2010-11-08, 18:26  #8 
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
10059_{10} Posts 
It is fairly close, but the gnfs poly is sieving faster. (And it was a good continued test for the Teslas: two similar c176, but different outcomes.) Also, it is quite possible that the fastest way would be ECM, but I believe that this time Bruce added ECM curves to it, as well as selecting a good gnfs poly. This one is sitting at the end of the queue; no rush on it (it will become smaller-but-wanted after page 119); there's time to see it accidentally cracked by ECMNET.

I volunteer to do the algebra for 1870L, for 1930M, or both, if you would like. (Reformatting etc. is no problem.) 
2010-11-08, 18:49  #9  
"Bob Silverman"
Nov 2003
North of Boston
2^{2}·1,877 Posts 
Quote:
matrix bigger than about 8.5M rows. 2,1870L is "about" 20% sieved. I say "about" because I am uncertain as to how many total relations will be needed with a rational LP bound of 31 bits. 
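One rough way to see why the relation target is only known "about"-ly in advance: a common rule of thumb is that the number of unique relations needed is on the order of the count of primes below the large-prime bounds on the two sides. A sketch using the prime-counting approximation pi(x) ≈ x/ln x; the 31-bit rational bound is from the post, while the equal algebraic-side bound is only an assumed example:

```python
import math

def pi_approx(x):
    """Prime-counting approximation pi(x) ~ x / ln(x)."""
    return x / math.log(x)

rational_lp = 2 ** 31    # 31-bit rational LP bound, as stated in the post
algebraic_lp = 2 ** 31   # assumed equal bound on the algebraic side

# Rule-of-thumb target: unique relations on the order of the number of
# large primes available on both sides combined.  Duplicate relations
# and relations carrying two large primes per side shift the real
# target up or down, hence the uncertainty while sieving is underway.
target = pi_approx(rational_lp) + pi_approx(algebraic_lp)
print(f"rough unique-relation target: {target / 1e6:.0f}M")
```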

2010-11-09, 19:00  #10  
Noodles
"Mr. Tuch"
Dec 2007
Chennai, India
10011101001_{2} Posts 
Quote:
2,1870M, which I think you deliberately reserved, even after Mr. Silverman posted several times that "I plan to do 2,1870M next". Plus the easier reservations by NFS@Home, probably the GNFS numbers below difficulty 180 digits; numbers of that size can be done even by people with much smaller resources. I don't know whose fault it is; maybe you reserved that number after Mr. Silverman posted that he may not be able to do 2,1870L? That means that Mr. Silverman 'lost' his number, rather? I would suggest that he could have reserved that number (candidate 2,1870M) earlier on...

No, the numbers in the linear algebra stage cannot be counted as reservations again, since the linear algebra runs remotely on 4 cores; it does not affect the rest of the sieving, and it can't be made any faster either. Quote:
If this number is in demand, I will do it before 2,985; I had rather thought that I would do it immediately after 2,985 (in third place, after the two numbers 2,2334M and 2,985). It is not fair to ask me to release a number after I have prepared for it. You said that you cannot use larger factor bases even with 2,1870L; that means 2,1930M will take even longer. Would you mind considering either of the two numbers 11,539M or 3,605+? 3,605+ is within the main Cunningham tables right now, if it survives the optimal amount of ECM activity. This number had been from the extended Cunningham tables; I don't mind if it takes much longer, rather. In my opinion, this is an ideal number for all your larger-quartic "code tests". Without complaining further from now on, I would rather say: do not lose this number, at least.

Last fiddled with by Raman on 2010-11-09 at 19:23 

2010-11-09, 20:00  #11  
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
3×7×479 Posts 
Quote:
Plus, realistically, there are no numbers left currently for anyone except NFS@Home. The 3+/- extensions will give work to everyone, in every weight category, and that could happen around the turn of the year, possibly. ECMNET has weeded out many easy factors already (there's a steady flow of low p50s), and even double submissions have started to happen, which hints at a certain saturation of ECM efforts. 
