
mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Data (https://www.mersenneforum.org/forumdisplay.php?f=21)
-   -   Thinking out loud about getting under 20M unfactored exponents (https://www.mersenneforum.org/showthread.php?t=22476)

petrw1 2017-07-26 04:45

Thinking out loud about getting under 20M unfactored exponents
 
[url]http://www.mersenne.ca/status/tf/0/0/1/0[/url]

Breaking it down, I'm thinking that if each 100M range has less than 2M unfactored, we have the desired end result.
Similarly if each 10M range has less than 200K unfactored...
or each 1M range has less than 20K unfactored...
or each 100K range has less than 2,000 unfactored.
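
(Spelled out, those per-range targets are just the 20M goal scaled by the width of the range, assuming the full 0-999.9M exponent range; a throwaway Python sketch, not my actual spreadsheet:)

[CODE]
# Proportional breakdown of the "under 20M unfactored" goal, assuming the
# full 0 - 999.9M exponent range (illustrative sketch only).
TOTAL_TARGET = 20_000_000        # goal: fewer than 20M unfactored overall
FULL_RANGE   = 1_000_000_000     # exponents run from 0 to 999.9M

for width, label in [(100_000_000, "100M"), (10_000_000, "10M"),
                     (1_000_000, "1M"), (100_000, "100K")]:
    target = TOTAL_TARGET * width // FULL_RANGE
    print(f"each {label} range: under {target:,} unfactored")
[/CODE]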

So I did some Excel ciphering, looking at:
- how many more factors are required in each range
- how many exponents need to be TF'd at the current bit level to get there (it could require several bit levels to complete)
- how many GHz-days each assignment would take
- I stopped at the 59M range, figuring that current GPU TF bit levels will (most of the time) find enough factors to get below my limits of interest here.

I did this for the 10M, 1M and 100K ranges.

Then I added it all up and came up with very roughly 250M GHz-days of TF, with some ranges requiring up to 10 more bit levels of TF. WOW.

To put that in perspective: my GPUs, at about 1,000 GHz-days per day, would take 250K days, or roughly 685 years.
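
(Checking that conversion is a couple of lines of Python; the 250M GHz-days is the rough spreadsheet total above and the 1,000 GHz-days/day is my GPUs' output:)

[CODE]
total_ghz_days = 250e6     # rough TF estimate from the spreadsheet
per_day        = 1_000     # GHz-days per day my GPUs produce
days = total_ghz_days / per_day
print(f"{days:,.0f} days = about {days / 365:.0f} years")   # 250,000 days = about 685 years
[/CODE]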

Oh dear; that's way more than I had expected.

Note: I only considered TF.
I understand that in some (many?) cases ECM (on lower exponents) and P-1 could find factors much quicker.

In either case, it looks like this will be a very far-off milestone.
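
For anyone new to this: TF works because any prime factor q of M(p) = 2^p - 1 (p prime) has the form q = 2kp + 1 and satisfies q ≡ ±1 (mod 8), so only a thin slice of the candidates at each bit level needs testing. A toy Python sketch of the idea (nothing like the optimized mfaktc/mfakto GPU kernels, just the underlying math):

[CODE]
# Toy trial factoring of M(p) = 2^p - 1: test only q = 2*k*p + 1 with
# q congruent to +/-1 (mod 8), the only residue classes a factor can occupy.
# A composite q only passes the pow() test if it genuinely divides M(p),
# so no primality check is needed for correctness (real TF sieves for speed).
def tf(p, max_bits):
    k = 1
    while True:
        q = 2 * k * p + 1
        if q.bit_length() > max_bits:
            return None                    # no factor below 2^max_bits
        if q % 8 in (1, 7) and pow(2, p, q) == 1:
            return q                       # q divides M(p)
        k += 1

print(tf(29, 16))   # 233: M(29) = 233 * 1103 * 2089, and 233 = 2*4*29 + 1 turns up at k = 4
[/CODE]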

[CODE]
=== Process where current B1=B2 first; then lowest current B1&B2.
=== Even when B2>B1 the current bounds are mostly quite low and factors are plentiful.
=== I'm only listing ranges here that have at least 74 bits TF
=== There are 10 more ranges waiting for TF74 before I will list them here: 20.1 to 21.2

Range  ToGo  B1=B2  TFBits  Owner / notes

=== Any in this group will clear with relatively low P-1 bounds.
=== Something like 1M/30M should be more than enough.
22.4      3      0    75    Kruoli
27.8      7    818    75    Chris
21.3      2    302    74    Kruoli
21.6     14      0    74    Chris
20.3     19    437    73*   Luminescence
20.5     19     57    73*

26.2     23    792    75
26.9     25    793    75
28.0      8    538    75    petrw1
26.8     25    848    75    petrw1

=== Starting about here, consider about 1.5M/45M, though 1M/30M might do it.
24.7     39    700    75    petrw1
26.4     43    812    75
21.8     46    718    74*
21.7     49    819    74*

=== These are getting a little more dicey.
=== I'd be tempted to wait for TF75*
=== But if you are ambitious, consider 3M/90M as minimal bounds; even more near the end of this list.
22.3     57      0    74.9  TF to 75
26.5     62    800    75    Luminescence
27.0     72    848    75
21.9     71    333    74    TF to 75
25.3     77    766    75    Anton Repko (P-1 and P+1)
29.8     91    796    75    Kruoli
[/CODE]
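
On the P-1 side, the reason such modest bounds work: every factor q of M(p) has q ≡ 1 (mod 2p), so 2 and p come "for free" in q-1, and only the rest of q-1 has to be smooth up to B1 (plus one larger prime up to B2 in stage 2). A minimal stage-1 sketch in Python using sympy, purely illustrative and nothing like the Prime95/GpuOwl implementation (no stage 2, no save files):

[CODE]
# Minimal P-1 stage 1 on M(p): compute 3^E mod M(p) with E the product of all
# prime powers <= B1, then take a gcd.  It finds q whenever q - 1 is B1-smooth.
# Demo: the factor 223 of M(37), since 223 - 1 = 2 * 3 * 37 is extremely smooth.
# (The cofactor 616318177 has 616318177 - 1 = 2^5 * 3 * 37 * 167 * 1039,
#  so it needs B1 >= 1039, or B1 >= 167 with a stage 2 reaching 1039.)
from math import gcd
from sympy import primerange

def pm1_stage1(p, B1, base=3):
    N = (1 << p) - 1                    # M(p) = 2^p - 1
    a = base
    for q in primerange(2, B1 + 1):
        qk = q
        while qk * q <= B1:             # largest power of q not exceeding B1
            qk *= q
        a = pow(a, qk, N)
    g = gcd(a - 1, N)
    return g if 1 < g < N else None     # None: nothing found (or everything at once)

print(pm1_stage1(37, 50))   # 223
[/CODE]

A stage 2 (not shown) relaxes the requirement further: it picks up one extra prime between B1 and B2, which is why the B1/B2 pairs in the table (1M/30M and up) are so effective on these exponents.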

0PolarBearsHere 2017-07-26 07:48

It just means we need more GPUs.
For instance, if we can get 1,000 high-end GPUs on it, we could get it done in under a year based on your maths. We just need to find an organisation with a spare 800K USD and a sudden urge to generously donate GPUs to anyone who requests one.

VictordeHolland 2017-07-26 10:43

And what would this accomplish?

petrw1 2017-07-26 16:03

[QUOTE=VictordeHolland;464195]And what would this accomplish?[/QUOTE]

Absolutely nothing of consequence.
Nothing more than another milestone of interest to some.

S485122 2017-07-26 19:48

If your best tool is a factoring machine, you view everything as entities to be factored. :-)

Jacob

chalsall 2017-07-26 20:03

[QUOTE=S485122;464237]If your best tool is a factoring machine, you view everything as entities to be factored. :-)[/QUOTE]

Just to reflect on Jacob's point... Sometimes it is worth the effort to think about what other people are thinking about...

In addition to the Phillips, are you familiar with the Robertson? The hex?

I have actually watched people slam screws into wood using a hammer, because the Phillips screws' heads had been stripped by a screwdriver that was too small.

I actually learned some new words (containing many symbols, including (!*%$@***!!!)) from men who should have understood the simplicity of the situation.

For what that is worth....

CRGreathouse 2017-07-26 20:35

[QUOTE=VictordeHolland;464195]And what would this accomplish?[/QUOTE]

I'm not sure what the OP has in mind, but I know that full factorizations of small Mersenne numbers are very useful. For example, they greatly speed up the [url=http://www.janfeitsma.nl/math/psp2/not-sqrt-smooth]non-sqrt-smooth part[/url] (which dominates computationally) of [url=http://www.janfeitsma.nl/math/psp2/index]Feitsma's algorithm[/url] for listing 2-pseudoprimes. I've heard there is interest in extending his work beyond 2^64, so this isn't just academic.
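
(For anyone unfamiliar with the term: a 2-pseudoprime is a composite n that nevertheless satisfies 2^(n-1) ≡ 1 (mod n); the smallest is 341 = 11 × 31. A two-line illustration, not part of Feitsma's code:)

[CODE]
# 341 = 11 * 31 is the smallest base-2 Fermat pseudoprime: it is composite
# yet passes the Fermat test to base 2.
n = 341
print(pow(2, n - 1, n) == 1)   # True  -> passes the base-2 Fermat test
print(n % 11 == 0)             # True  -> but 341 = 11 * 31 is composite
[/CODE]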

As for finding individual factors, I don't know... I guess it just gives simpler/shorter certificates of compositeness.

Gordon 2017-07-26 23:01

[QUOTE=VictordeHolland;464195]And what would this accomplish?[/QUOTE]

because they are there and because we can :-)

science_man_88 2017-07-26 23:25

[QUOTE=VictordeHolland;464195]And what would this accomplish?[/QUOTE]

If done high enough, in theory, it could deplete the candidate factors for larger Mersenne numbers a bit.
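
There is a precise sense in which that's true: a prime q dividing M(p) (p prime) has multiplicative order of 2 equal to exactly p, so it can never divide M(p') for any other prime exponent p'; every factor found is permanently crossed off everyone else's candidate list, just not very many of them. A quick check (illustration only, using sympy):

[CODE]
# 23 divides M(11) = 2047 = 23 * 89, and the order of 2 mod 23 is exactly 11,
# so 23 divides 2^n - 1 only when 11 | n -- i.e. for no other prime exponent.
from sympy import n_order, primerange

q = 23
print(n_order(2, q))                                         # 11
print([p for p in primerange(2, 300) if pow(2, p, q) == 1])  # [11]
[/CODE]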

chalsall 2017-07-26 23:52

[QUOTE=science_man_88;464251]If done high enough, in theory, it could deplete the candidate factors for larger Mersenne numbers a bit.[/QUOTE]

Yeah... In theory....

storm5510 2017-07-27 04:40

1 Attachment(s)
I believe just about everyone here recognizes the image I have attached. This ends at 2[SUP]80[/SUP]. I suppose some here could comfortably TF to this level in a reasonable period of time. Of course, I do not know what most would consider "reasonable."

The last I heard, a computer "generation" was in the area of 18 months. It is probably less now. It would take many generations of tech growth to get to the level the OP was writing about.

[U]Point[/U]: Let us do now what needs to be done now, and not think about the future.

:smile:

