2017-07-26, 04:45  #1 
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
1001010101000_{2} Posts 
Thinking out loud about getting under 20M unfactored exponents
http://www.mersenne.ca/status/tf/0/0/1/0
Breaking it down: if each 100M range has fewer than 2M unfactored exponents, we have the desired end result. Similarly if each 10M range has fewer than 200K unfactored... or each 1M range fewer than 20K... or each 100K range fewer than 2,000.

So I did some Excel ciphering looking at:
- how many more factors are required in each range
- how many exponents need to be TF'd at the current bit level to get there (it could require several bit levels to complete)
- how many GhzDays each assignment would take

I stopped at the 59M range, thinking current GPU TF bit levels will factor adequately (most of the time) to get below my limits of interest here. I did this for the 10M, 1M and 100K ranges.

Then I added it all up and came up with very roughly 250M GhzDays of TF, with some ranges requiring up to 10 more bit levels. WOW. In perspective, my 1,000-GhzDays-per-day GPUs would take 250K days: 685 years. Oh dear; that's way more than I had expected.

Note: I only considered TF. I understand that in some (many?) cases ECM (on lower exponents) and P-1 could find factors much quicker. In either case it looks like this will be a very far-off milestone. Code:
=== Process where current B1=B2 first; then lowest current B1&B2.
=== Even when B2>B1 the current bounds are mostly quite low and factors are plentiful.
=== Judger is still systematically TF'ing to 75 bits where 20+ to go.
=== I'd suggest not doing P-1 on any ranges lower than TF75 beyond the first section below.

Range  ToGo  B1=B2  TFBits  Owns
=== Any in this group will clear with relatively low P-1 bounds.
=== Something like 1M/30M should be more than enough.
24.2      6    616      75
28.1      7    679      74   Kruoli
25.7      8    739      75   Chris
23.4      7     12      75   Chris
25.9     11    799      75
29.0     14    718      75
24.4     15    577      75
27.8     15    818      75
26.2     23    792      75
26.9     25    793      75
28.0     28    538      75
26.8     31    848      75
=== Starting about here consider about 1.5M/45M, though 1M/30M might do it.
24.7     39    700      75
26.4     43    812      75
=== These are getting a little more dicey.
=== I'd be tempted to wait for Judger to TF75.
=== But if you are ambitious consider 2M/60M as minimal bounds; even more near the end of this list.
27.2     61    835      75
26.5     62    800      75
27.0     72    848      75
25.3     77    766      75
29.8     91    796      75

Last fiddled with by petrw1 on 2021-10-25 at 14:14 Reason: Keeping the "Help Wanted" updates here on post 1 
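[Editor's note: since the list above is essentially a P-1 work queue, here is a minimal sketch of why P-1 with modest B1 bounds finds Mersenne factors. This is not Prime95's or GMP-ECM's implementation, just an illustration; it uses the long-factored M29 rather than any exponent from the table.]

```python
from math import gcd, isqrt

def primes_upto(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, isqrt(n) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i in range(n + 1) if sieve[i]]

def pm1_stage1(p, B1):
    """P-1 stage 1 on the Mersenne number N = 2^p - 1.

    Any factor q of N has the form q = 2*k*p + 1, so p divides q - 1
    for free; stage 1 then finds q whenever the rest of q - 1 is
    B1-smooth.  Returns a proper factor of N, or None.
    """
    N = (1 << p) - 1
    x = pow(3, 2 * p, N)          # fold in the guaranteed 2*p part of q - 1
    for q in primes_upto(B1):
        qe = q
        while qe * q <= B1:       # raise q to its largest power <= B1
            qe *= q
        x = pow(x, qe, N)
    g = gcd(x - 1, N)
    return g if 1 < g < N else None
```

For example, `pm1_stage1(29, 8)` returns 233: 233 − 1 = 2^3 · 29 is 8-smooth once the free factor 29 is accounted for, while the other factors 1103 and 2089 of 2^29 − 1 need 19 and 3^2 respectively and so survive. Stage 2 (B2 > B1) extends this to q − 1 with one extra prime factor up to B2, which is why the 1M/30M-style bounds above are quoted as pairs.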
2017-07-26, 07:48  #2 
Oct 2015
2×7×19 Posts 
It just means we need more GPUs.
For instance, if we can get 1,000 high-end GPUs on it, we could get it done in under a year based on your maths. We just need to find an organisation with a spare 800K USD that has a sudden urge to generously donate GPUs to anyone who requests one. Last fiddled with by 0PolarBearsHere on 2017-07-26 at 07:49 
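[Editor's note: the "under a year" claim checks out against the figures in post #1 (my arithmetic, using the OP's "1,000 per day" GPU throughput):]

```python
total_ghzdays = 250_000_000        # OP's rough total TF estimate
ghzdays_per_gpu_per_day = 1_000    # OP's "1,000 per day" GPUs
gpus = 1_000

days = total_ghzdays / (ghzdays_per_gpu_per_day * gpus)
print(days)                        # 250.0 -- comfortably under a year

# and the solo figure from post #1:
print(round(total_ghzdays / ghzdays_per_gpu_per_day / 365))  # 685 years
```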
2017-07-26, 10:43  #3 
"Victor de Hollander"
Aug 2011
the Netherlands
2^{3}×3×7^{2} Posts 
And what would this accomplish?

2017-07-26, 16:03  #4 
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
2^{3}·3·199 Posts 

2017-07-26, 19:48  #5 
"Jacob"
Sep 2006
Brussels, Belgium
5·349 Posts 
If your best tool is a factoring machine, you view everything as an entity to be factored. :)
Jacob 
2017-07-26, 20:03  #6  
If I May
"Chris Halsall"
Sep 2002
Barbados
26F2_{16} Posts 
Quote:
In addition to the Phillips, are you familiar with the Robertson? The hex? I have actually watched people slam screws into wood using a hammer, because the Phillips screws' heads were stripped with a screwdriver which was too small. I actually learned some new words (containing many symbols, including (!*%$@***!!!)) from men who should have understood the simplicity of the situation. For what that is worth.... Last fiddled with by chalsall on 2017-07-26 at 20:07 

2017-07-26, 20:35  #7 
Aug 2006
3×1,993 Posts 
I'm not sure what the OP has in mind, but I know that full factorizations of small Mersenne numbers are very useful. For example, they greatly speed up the non-sqrt-smooth part (which dominates computationally) of Feitsma's algorithm for listing 2-pseudoprimes. I've heard of interest in extending his work beyond 2^64, so this isn't just academic.
As for finding individual factors, I don't know... I guess it just gives simpler/shorter certificates of compositeness. 
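[Editor's note: for concreteness, a 2-pseudoprime is a composite n with 2^(n−1) ≡ 1 (mod n). The link to Mersenne factoring: every prime factor q of such an n satisfies ord_q(2) | n − 1, so q divides 2^d − 1 for some d dividing n − 1, which is how factored 2^k − 1 values feed pseudoprime enumeration. A toy checker (my own sketch, not Feitsma's algorithm):]

```python
def is_2psp(n):
    """True iff n is a base-2 Fermat pseudoprime:
    odd, composite, and 2^(n-1) == 1 (mod n)."""
    if n < 3 or n % 2 == 0 or pow(2, n - 1, n) != 1:
        return False
    # confirm compositeness by trial division (fine at toy sizes)
    return any(n % d == 0 for d in range(3, int(n ** 0.5) + 1, 2))
```

For example, `is_2psp(341)` is True: 341 = 11 × 31, and both prime factors divide 2^10 − 1 = 1023 = 3 × 11 × 31, with 10 | 340.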
2017-07-26, 23:01  #8 
Nov 2008
501_{10} Posts 

2017-07-26, 23:25  #9 
"Forget I exist"
Jul 2009
Dumbassville
2^{6}×131 Posts 

2017-07-26, 23:52  #10 
If I May
"Chris Halsall"
Sep 2002
Barbados
26F2_{16} Posts 

2017-07-27, 04:40  #11 
Random Account
Aug 2009
3×661 Posts 
I believe just about everyone here recognizes the image I have attached. This ends at 2^{80}. I suppose some here could comfortably TF to this level in a reasonable period of time. Of course, I do not know what most would consider "reasonable."
The last I heard, a computer "generation" was in the area of 18 months. It is probably less now. It would take many generations of tech growth to get to the level the OP was writing about. Point: Let us do now what needs to be done now, and not think about the future. 
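[Editor's note: the reason TF to 2^80 is so much harder than the chart suggests is that each extra bit level roughly doubles the work of the previous one (the usual assumption for mfaktc-style GPU trial factoring). A quick sketch of the consequence:]

```python
def tf_cost_ratio(from_bits, to_bits):
    """Total TF work from 2^from_bits to 2^to_bits, in units of the
    cost of the single from_bits -> from_bits+1 step.  Assumes each
    bit level costs roughly twice the previous one."""
    return sum(2 ** i for i in range(to_bits - from_bits))
```

So `tf_cost_ratio(75, 80)` is 31: taking an exponent from 75 to 80 bits costs about 31× the 75→76 step alone, and the final 79→80 level (16×) costs more than all the earlier levels combined.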