mersenneforum.org > Other Stuff > Archived Projects > NFSNET Discussion

2007-10-05, 19:42   #23
fivemack

Good point about the cube root; I hadn't thought about what was happening on the linear side, and just remembered that the quadratic for x^3-1 can be turned trivially into a sextic. Sorry to have raised your hopes about 2,1914M.

Going through, 2,1962M is actually managing to use the factor nine by working with the factorisation of 2^18*x^36+1 ... I didn't expect that to be possible. Cool.

[in the past you've occasionally posted things here suggesting that you don't have access to computational algebra; I'm doing all of this with pari/gp, which is conveniently free software, though I'm sure you've got hold of that yourself]
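[Editor's note: for readers without pari/gp to hand, here is a rough Python/sympy sketch of the kind of check fivemack describes. The substitution x = 2^54 (so that 2^18*x^36 + 1 evaluates to 2^1962 + 1) and the use of sympy instead of gp are illustrative choices, not part of the original post.]

```python
# A minimal sketch (sympy rather than pari/gp) of the algebraic setup
# fivemack alludes to: the factorisation over Z of 2^18*x^36 + 1, whose
# pieces are the starting point for SNFS polynomial selection for
# 2,1962M (a divisor of 2^1962+1).
from sympy import symbols, factor_list, Poly

x = symbols('x')
p = 2**18 * x**36 + 1

# Factor over the integers and show the degree of each algebraic piece.
_, factors = factor_list(p)
for f, exp in factors:
    print(Poly(f, x).degree(), f)

# Sanity check on the substitution: with x = 2^54 the polynomial value
# is 2^(18 + 36*54) + 1 = 2^1962 + 1.
assert p.subs(x, 2**54) == 2**1962 + 1
```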

2007-10-08, 09:12   #24
bdodson

Quote:
Originally Posted by bdodson
That was quick; 6, 284+ wins. (Unless ECMNET gets
a factor within the week.) Suppose 6,292+ would serve
as a replacement for a poll. If that one wins next, the
768-bit list would be down to four last base-7's. -bd
No; no factor in my 3rd & last t50 on 6,284+. But 6,292+ isn't
going to do as a replacement:

p60 = 151634244917416206035101114864937647283016448179107389644473

with prime cofactor. One more number to go to finish the 3rd
t50 on the last of the c190-c233's in difficulty 220-229. This one
was on the More Wanted list, 6th in the top 10. -Bruce

2007-10-09, 21:44   #25
R.D. Silverman
2,1962M

Here is 2,1962M C173 = p54.p119

p54 = 561070572288256277136602810062157316007570131157641589
p119 = 52548716528304902570734222019216090488579184876231505008640646786326028262229620519239651894875787945135414973991400093

2,1630M is in progress. It will take a while since a quartic is sub-optimal.

2007-10-11, 11:52   #26
R.D. Silverman

Quote:
Originally Posted by bdodson
Substantial progress since the last time we looked; finishing 2,1582L c162
will move 2,1598M C160 up into a fifth hole; with the 12 remaining all readily
visible on Sam's page. Base-2's are fine with me; the only concern
being that if there aren't enough sievers we'd drift up towards six months
of sieving, which is a long time to wait for someone just considering
joining. -bd

Kleinjung finished 2,799- C188 = p56.p133

2007-10-11, 23:50   #27
fivemack

This may be an unnecessarily contentious post, but do you consider Kleinjung's result an ECM miss? I think it's marginal; a curve at the 55-digit level takes about 30 minutes on hardware on which I'd expect 240-digit SNFS to take around 20,000 hours, and 40,000 curves would probably have picked up a p56, but I'm not sure that ECM on that number is the first use to which I'd have put 20,000 CPU-hours.
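[Editor's note: the back-of-the-envelope arithmetic behind that comparison, using fivemack's own rough figures from the paragraph above rather than measured numbers, looks like this:]

```python
# Rough cost comparison sketched from the estimates in the post above;
# all three inputs are fivemack's ballpark figures, not benchmarks.
curves_at_p55_level = 40_000      # curves "probably" needed to catch a p56
hours_per_curve = 0.5             # "about 30 minutes" per 55-digit-level curve
snfs_hours_estimate = 20_000      # expected SNFS cost for a ~240-digit number

ecm_hours = curves_at_p55_level * hours_per_curve
print(f"ECM effort needed : {ecm_hours:,.0f} CPU-hours")
print(f"SNFS estimate     : {snfs_hours_estimate:,} CPU-hours")
print(f"ECM / SNFS ratio  : {ecm_hours / snfs_hours_estimate:.1f}")
# The ratio comes out near 1, which is why the call is described as
# marginal: reliably pre-empting the p56 by ECM would have cost about
# as much as the SNFS job itself.
```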

2007-10-12, 02:23   #28
bdodson

Quote:
Originally Posted by fivemack
This may be an unnecessarily contentious post, but do you consider Kleinjung's result an ECM miss? I think it's marginal; a curve at the 55-digit level takes about 30 minutes on hardware on which I'd expect 240-digit SNFS to take around 20,000 hours, and 40,000 curves would probably have picked up a p56, but I'm not sure that ECM on that number is the first use to which I'd have put 20,000 CPU-hours.
It seems to me that, by what I understand as the conventional use,
what you're discussing here is a hypothetical. For this to be an ecm miss, where
ecm didn't do what we expected, you'd have had to actually run the
20,000 CPU hours. Optimal use of computing resources for ecm also
has a built-in failure rate. If 40,000 curves with B1=110M is an optimal
test for a known p55, we're supposed to stop at probability 1-1/e of
finding the factor, allowing 1/e (about 37%) of a chance of not
getting that specific p55; then re-estimate the next most likely factor size,
presumably p60, and switch B1 to look for p60's. So if there were 10
sieving candidates with a p55, we're supposed to find six or seven of them and
leave the other three or four. So at/near the bleeding edge of ecm performance, no
single prime factor found by sieving instead of ecm is ecm's fault. So,
as I understand the issue, the curves have to have actually been run,
and for a single instance to qualify as an ecm miss, the factor should
be notably below the level to which ecm was run.

In this case, Kleinjung's reservation was way back in late June (it's
on the July 1 "who's doing what"), so there were only 2*t50 bdodson
curves run; perhaps somewhat more, (2+epsilon)*t50, since this
was a base-2 number. For me to say that ecm (rather than its
operators, deciding what numbers to feed into ecm) had missed a
specific factor of a number run to 2*t50, I'd be thinking of something
like p47-p48. Peter has a term, "removing" an ecm factor size rather
than "finding" it, for which one runs twice the number of curves "expected",
lowering the probability of leaving a factor of that size to 1/e^2. So if
you were having hesitations about the 20,000 CPU hours, I expect
that if it were a question of 40,000 you'd much rather have spent
the time sieving, with which we'd be making certain progress towards
the factorization.
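
[Editor's note: a small sketch of the arithmetic behind the 1-1/e and 1/e^2 figures in the two paragraphs above. The "expected" curve count for a given factor size is treated as one unit of work, and the exponential model is the standard heuristic, not anything specific to these numbers.]

```python
import math

# Heuristic model described above: if N curves is the "expected" count for a
# given factor size, then after k*N curves the chance that a factor of that
# size is still hiding is roughly exp(-k).
def miss_probability(k: float) -> float:
    """Chance of having missed the factor after k times the expected curves."""
    return math.exp(-k)

print(miss_probability(1.0))   # ~0.368 = 1/e, the optimal stopping point
print(miss_probability(2.0))   # ~0.135 = 1/e^2, Peter's "removing" a factor size

# Ten sieving candidates each hiding a p55, each run to the optimal 1*t level:
candidates = 10
expected_found = candidates * (1 - miss_probability(1.0))
print(round(expected_found, 1))  # ~6.3: find roughly six or seven, leave the rest
```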

Taking the two recent small factors together, Bob's p54 and Thorsten's
p56, they seem entirely consistent with the Silverman-Wagstaff analysis --
if an ecm t50 has failed to find a factor, the next most likely factor size
to look for is p55. And we're still a long way from being willing to run t55's
on numbers of small snfs difficulty. Actually, I find these factor sizes
somewhat encouraging: if/when almost all of the gnfs/snfs smallest factor
sizes are above p80, ecm will no longer be an attractive method.
-Bruce

PS - In the pdf JasonP points to on the kilobit snfs, the authors are
grumbling that if they'd known that there was a p80 they might have
run some more ecm. Sounds like we're within a generation or two of
the first p80 referred to as an ecm miss! (That's cpu/memory generations;
sooner than one might expect.)

2007-10-12, 15:13   #29
R.D. Silverman

Quote:
Originally Posted by bdodson

<snip>

And we're still a long way from being willing to run t55's
on numbers of small snfs difficulty. Actually, I find these factor sizes
somewhat encouraging: if/when almost all of the gnfs/snfs smallest factor
sizes are above p80, ecm will no longer be an attractive method.
I would not call M799 "small snfs difficulty". Otherwise, we are in total
agreement.

BTW, I don't think the p56 is even close to being an ECM miss.

The p51 from 11,251+ might be.

2007-10-13, 14:15   #30
bdodson

Quote:
Originally Posted by R.D. Silverman
I would not call M799 "small snfs difficulty". Otherwise, we are in total agreement.
...
The p51 from 11,251+ might be.
Thanks for the wake-up call! On M799, difficulty 240.52: I have
these in my difficulty 230-249 range. Most of the grid CPUs are on 250-361,
both the larger-memory P4s (B1=110M) and the Core2s (B1=260M),
split between c211-c233 and c190-c210. The Opterons just finished a 3rd
t50 on c190-c233 with difficulty 220-229.99, and are starting in
on 230-239.99. The new dual Xeon quads are warming up on
240-249.99, also c190-c233. So, far and away, most of my curves
are going on numbers with difficulty above 241!

I've been referring to the c147-c154's as "soon to be smaller-but-needed"
for a year or so already; but those are shrinking steadily, leaving
c155-c169, and even c170-c189, as "smaller". After-effects, perhaps,
of my extended run on c251-c365. If we finish the ones with (snfs)
difficulty below 220 for which there's a quintic or sextic, these
new-smaller c155-c179's will shrink towards degree 4's and/or gnfs's.
-Bruce

2007-10-18, 15:55   #31
bdodson
Smokin' !!

Quote:
Originally Posted by R.D. Silverman
... I had promised Dick Lehmer that I would push to finish the base 2 tables ...
Some time ago, NFSNET had asked for suggestions for some "easier"
numbers. I had suggested the 768-bit list as an alternative to the
harder base 2 numbers.

In fact, if NFSNET wants to make an effort to finish the base 2 tables
through 800 bits, ...
Looks like you're getting your wish: NFSNET seems to have completed
its run on the 768-bit list with 6,284+. Perhaps someone else will
pick up the remaining ones (the last one has finished 3*t50).

As I recall the NFSNET charter, the objective isn't so much to clean
up the numbers within a comfortable range as to push on to larger
benchmarks. So, as Xilman observes, 2,779+.C212 is difficulty 235,
the largest we've done in a while (Lehigh seems to have been the
last to switch); and the winner of the next number "vote" was
10,239-.C228, difficulty 239. Looks like Thorsten was headed in the
right direction, with difficulties in the 240's. -Bruce

2007-10-31, 02:02   #32
Wacky

This evening, Greg reported to me:
Quote:
5,323- finished successfully. The factors are
prp54 factor: 824025642333621472612253607491152025643258690550015151
prp61 factor: 4520075300365525822415973296109200878340148487916084028121991
prp72 factor: 132981150324062454692451481044833258173562011479994362058454095433879531
This factoring utilized a combination of the CWI suite and post processing from msieve.

87.9M unique relations were collected by line sieving.

I then processed the data, removing the singletons and cliques, to the point that there was a 3.4M excess for ideals > 10M.
Those remaining 26M relations were sent to California where Greg used msieve to further reduce the data to a 6.4M matrix.
The Block Lanczos phase ran from Thu Oct 25 14:33:54 2007 to Tue Oct 30 03:36:20 2007.
We would like to thank Greg's colleague who gave up his machine not only for the weekend, but also all day Friday and Monday to run the matrix solution.

We continue sieving for 2,779+.C212 and should switch to 10,239-.C228 early next month.

2007-11-01, 21:03   #33
bdodson

Quote:
Originally Posted by Wacky
We continue sieving for 2,779+.C212 and should switch to 10,239-.C228 early next month.
5,323- was selected at the last moment; in fact, the project file
is in the same email as confirmation of the selection. So no extra ecm.
I see a report of 2*t50, and the selection was before Bob's "(near) miss"
of a p53, which was when I started queueing 3rd t50's. I did a bit
better with 6,284+, with a last-minute 3rd t50 (thanks to an early
"who's doing what?" from Sam, which had the NFSNET reservation). But
5,323- was earlier, and a 2*t50 effort is less than half of what's
needed for the p54; ecm didn't get a fair shot.

The current 779+ did get a 3rd t50; and the base-10 next number
got 4*t50. With current resources we wouldn't drop back to difficulty
below 230. Seems like M787 would be about the best we could do
in the mid-230 range, at the top of the most wanted list. We could
apply the same parameters to pick up 2,787+ at the same difficulty.
Or is there something in difficulty 240-249.99 that would be a better,
more difficult challenge? Setting the number early would give me a
better chance to make sure that the 3rd t50's been done, and get a
better chance at any p54-p69's by continuing on toward t55. I can
try guessing a likely range or ranges, but a definite early selection
would be best. -Bruce