Let's Optimize P-1 for low exponents. TL;DR in post #1. More in posts 60 and 61.

2022-01-11, 23:32   #34
Mark Rose

"/X\(‘-‘)/X\"
Jan 2013

2³·3²·41 Posts

Quote:
 Originally Posted by petrw1 (unless you give it a LLLLOTTTT of RAM)
Which is how much? I'd say 32 GB is pretty standard in a system these days. You mean like 256 GB? Or?

2022-01-11, 23:57   #35
petrw1
1976 Toyota Corona years forever!

"Wayne"
Nov 2006

5,179 Posts

Quote:
 Originally Posted by Mark Rose Which is how much? I'd say 32 GB is pretty standard in a system these days. You mean like 256 GB? Or?
You'd have to ask those who tried larger exponents with v30.8.
My guess is 32 GB is a minimum up there.

2022-01-12, 02:55   #36
Mark Rose

"/X\(‘-‘)/X\"
Jan 2013

2³·3²·41 Posts

I haven't played with it yet. I may get in on the P-1 fun, now that LLDC will complete.
2022-01-12, 03:34   #37
LaurV
Romulan Interpreter

"name field"
Jun 2011
Thailand

7×1,423 Posts

Quote:
 Originally Posted by VBCurtis Better to do it once, do it right- go deep enough on P-1 now that nobody would "ever" want to re-do it.
Quote:
 Originally Posted by VBCurtis I like this idea best! [snip] Also, LaurV likes it. The taste police have spoken!
Sure. As I said a long time ago: first, and most important, you reduce the risk of finishing the range only to find out that you need two or three more factors and have to redo the work with larger bounds, wasting a lot of time and resources. Second, you have A LOT more chances to find large, RECORD, factors. Third, even though each test takes longer, you will test fewer candidates, because you have a better chance of finding a factor with each test; so in the end you will find the needed number of factors in about the same time as running P-1 with lower bounds, AND you will leave 30-50% of the candidates untested, which are "low hanging fruit" for people with slow computers, or who want to test new tools, polish their horns, or whatever, in the future. Look at what I have done in 11.xM and am just now doing in 6.1M.

On the other hand, how did I manage to jump 34 positions up in the P-1 lifetime top without reporting any P-1 work?

Last fiddled with by LaurV on 2022-01-12 at 03:38

2022-01-12, 05:06   #38
petrw1
1976 Toyota Corona years forever!

"Wayne"
Nov 2006

5,179 Posts

Quote:
 Originally Posted by Mark Rose I haven't played with it yet. I may get in on the P-1 fun, now that LLDC will complete.
You're very welcome... be sure to upgrade to v30.8 if you want REALLY fast P-1.

2022-01-15, 01:07   #39
petrw1
1976 Toyota Corona years forever!

"Wayne"
Nov 2006

5,179 Posts
Some ciphering for one possible scenario.

The following is completely negotiable.

If, just as an example, we choose B1=1M as the preferred minimum for exponent 20M with a RAM allocation of 16GB,
and use George's "every time you double the exponent, the suggested B1 drops by a factor of 2.2",
I get this formula for a recommended B1 for any exponent. I won't bet the farm that I have this right, but it seems to spot-check:

Code:
2.2^LOG(20,000,000/<exponent>,2)*1,000,000
That is LOG base 2.
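For concreteness, here is the same formula as a minimal Python sketch (the function name and spot-check loop are mine, just for illustration; it assumes the B1=1M-at-20M anchor above and George's rough 2.2-per-doubling factor):

Code:
import math

def recommended_b1(exponent, anchor_exp=20_000_000, anchor_b1=1_000_000, factor=2.2):
    # Each halving of the exponent multiplies the suggested B1 by `factor`.
    return anchor_b1 * factor ** math.log2(anchor_exp / exponent)

# Spot checks: 20M -> 1.0M, 10M -> 2.2M, 5M -> ~4.84M
for p in (20_000_000, 10_000_000, 5_000_000):
    print(p, round(recommended_b1(p)))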

We can compute the ratios of recommended B1 to current B1 to find the best candidates.
Then, based on this comment where Tha is referring to RDS's wisdom:
Quote:
 I also recall him saying that redoing P-1 only makes sense when you choose B1 >= 10 x (B1 value during the previous run on that exponent).
I don't know if this 10x still applies in our new world where B2 is hundreds×B1 instead of tens×B1.

But in any case we could start with the highest ratios and work our way down to 10x.
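As a hypothetical sketch of that filtering (it reuses recommended_b1 from the sketch above; the current_b1 values here are made-up sample data, not real exponent status -- real inputs would come from the GIMPS exponent reports):

Code:
# Made-up sample data: exponent -> B1 of its previous P-1 run.
current_b1 = {4_600_000: 250_000, 8_300_000: 1_500_000, 19_700_000: 60_000}

# Rank by recommended/current B1 ratio, highest first.
candidates = sorted(((p, recommended_b1(p) / b1) for p, b1 in current_b1.items()),
                    key=lambda t: t[1], reverse=True)

for p, ratio in candidates:
    if ratio >= 10:  # the 10x redo threshold quoted above
        print(f"{p}: recommended/current B1 ratio = {ratio:.1f}")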

I did a count of how many we would be looking at with the above parameters:

Code:
Exponent below	Num with ratio > 10
1000000		141
2000000		5188
3000000		285
4000000		2700
5000000		7287
6000000		8992
7000000		0
8000000		8186
9000000		9448
10000000	7327
11000000	10321
12000000	2412
13000000	1278
14000000	4294
15000000	348
16000000	0
17000000	65
18000000	511
19000000	726
20000000	11
TOTAL		69520
That means: under 1M there are only 141 exponents with a current B1 < 10% of recommended,
and 5,188 between 1M and 2M.
Interesting that there are 0 between 6M and 7M... someone must have dabbled there.

I guesstimate that a reasonable PC could complete a P-1 such as these in about 1 hour.
So these almost 50,000 assignments are not a lot of work.
In fact my 5 PCs could do it in about a year.
So maybe my suggested parameters at the start could be more aggressive?
Or maybe this is just far enough.
Or maybe, if we extend this as requested to get all 10K ranges under 200 remaining, it gets a lot bigger.
I'll calculate later how many more there are between 10M and 20M ... or maybe higher.
=== OK, going to 20M added 20,000 more; most of them between 10M and 11M.

And if we go down to where the recommended B1 is 5× the current B1, it almost triples the number of exponents to process.

Thoughts?

Last fiddled with by petrw1 on 2022-01-15 at 06:14 Reason: Charted to 20M ... if cutoff 5 times

2022-01-15, 03:09   #40
Prime95
P90 years forever!

Aug 2002
Yeehaw, FL

2·7·563 Posts

Quote:
 Originally Posted by petrw1 Thoughts?
Please verify that 2.2 number; it was just a rough guess.

Maybe a less ambitious project is a good idea this time round. It won't lock you into a big commitment should something more interesting catch your eye. You can always adjust the project's scope by using 20% instead of 10%, 30M instead of 20M, etc. Anyway, it's your baby -- you lead and the crowd will follow :)

P.S. Thank you for making low level factoring a fun pastime for the last few years. I've enjoyed watching your progress and reading every single post since you started the effort.

2022-01-15, 04:50   #41
petrw1
1976 Toyota Corona years forever!

"Wayne"
Nov 2006

5179₁₀ Posts

Quote:
 Originally Posted by Prime95 Please verify that 2.2 number, it was just a rough guess.
Ooooooohhhh, I had assumed it was based on some inner workings of your new code.
OK, I'll give it some thought... but there are others more qualified for this.
If the goal is a consistent run-time (clock-time) I could do some testing.
Except-1: I don't have much time before I leave (4 days); and by the time I return (end of March) the Under 2000 project could be close to done.

If the goal is consistent GhzDays I could simply use this;
Except-1: I can easily vary the B1, but I have no real way of knowing what to use for B2. I just know it gets relatively larger (vs. B1) as the exponent gets smaller.
Except-2: the percentages and GhzDays it reports do NOT match what I am actually seeing reported and awarded by the software.
It is about 15% lower. I guess if it is consistently low, it is still useful for determining the appropriate parameters.

Quote:
 Maybe a less ambitious project is a good idea this time round. It won't lock you into a big commitment should something more interesting catch your eye. You can always adjust the projects scope by using 20% instead of 10%, 30M instead of 20M, etc. Anyway, it's your baby -- you lead and the crowd will follow :)
Well, I was just responding to requests from others and gathering the posts.
I hadn't gone as far as assuming paternity.
If someone else wants to lead I will follow ... but if I am acclaimed it will be fun coordinating another project.

Quote:
 P.S. Thank you for making low level factoring a fun pastime for the last few years. I've enjoyed watching your progress and reading every single post since you started the effort.

Aww, you are too kind; it was a slow start - I actually never expected it would finish, and I had more comments like "And what would this accomplish?" than followers.
However, for the last half a year or so people have been crawling over each other to get involved (well, almost).

Last fiddled with by petrw1 on 2022-01-15 at 05:07

2022-01-15, 05:43   #42
petrw1
1976 Toyota Corona years forever!

"Wayne"
Nov 2006
Saskatchewan, Canada

5,179 Posts

We also need some direction on how RAM affects this project.

To get comparable results, there is some agreement that we should give a rule of thumb for how to adjust the recommended B1 based on available RAM. Comparable as in: success rate or GhzDays; run time will definitely be longer if you have less RAM.

For example, again assuming the guidelines are based on 16GB RAM (not formally decided yet). These are very roughly the numbers I am seeing, but they are for different PCs so they may NOT be reliable:

If you have 12GB RAM then increase the recommended B1 by 10%.
... 8GB RAM ... 30%
... 4GB RAM ... 75%

If you have more than 16GB then you could similarly decrease the recommended B1.

Of course, in the end anything "recommended" really should be "suggested"; everyone is free to make their own choices. We would never be upset if you chose larger bounds, but would hope, for the sake of the ultimate goal of the project, that you would resist reducing the bounds.

Last fiddled with by petrw1 on 2022-01-15 at 06:16
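As a rough illustration of that table in Python (the percentages are only the provisional observations above, NOT a decided rule, and the names are made up for the sketch):

Code:
# Provisional fractional B1 increases vs. the 16GB baseline -- NOT a decided rule.
RAM_B1_INCREASE = {16: 0.00, 12: 0.10, 8: 0.30, 4: 0.75}

def adjusted_b1(recommended, ram_gb):
    # Bump the recommended B1 up when stage-2 RAM is below the 16GB baseline;
    # sizes not in the table get no adjustment in this toy sketch.
    return round(recommended * (1 + RAM_B1_INCREASE.get(ram_gb, 0.0)))

print(adjusted_b1(1_000_000, 8))  # -> 1300000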
2022-01-15, 11:30   #43
axn

Jun 2003

2³·233 Posts

I suspect that a good rule of thumb for B1 adjustment would be sqrt(ref RAM / allocated RAM), where ref RAM should be a good "high end" RAM allocation like 24GB (out of a typical 32GB installed RAM). That means, with a 6GB allocation, you'd have to double the suggested B1, and with a 96GB allocation, you can halve it.

This will need some validation: run some sample exponents to the suggested B1's, see what P95 computes as optimal B2, and what the corresponding probabilities are.

As for calculating the B1 itself, it might be better to use FFT size rather than exponent, as that is more representative of runtime. But either should be "good enough". There is some FFT data (including max exponent & reference timings) available in the P95 source, but I'm not very sure what each table means (there are lots of different tables). The reference timings could be used to scale the B1.
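In Python, that square-root rule of thumb is just (a minimal sketch, assuming the 24GB reference allocation suggested above):

Code:
import math

def b1_scale(allocated_gb, ref_gb=24):
    # Factor to multiply the suggested B1 by for a given stage-2 RAM allocation.
    return math.sqrt(ref_gb / allocated_gb)

print(b1_scale(6))   # -> 2.0: double the suggested B1
print(b1_scale(96))  # -> 0.5: halve it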
2022-01-15, 12:09   #44
firejuggler

"Vincent"
Apr 2010
Over the rainbow

2×3×11×43 Posts

A question I have: is it worthwhile to rerun a P-1 with the same bounds if you find a factor in stage 1? (adding the factor found, obviously)

Last fiddled with by firejuggler on 2022-01-15 at 12:44
