#34
"/X\(‘-‘)/X\"
Jan 2013
2³·3²·41 Posts
#35
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
5,179 Posts
#36
"/X\(‘-‘)/X\"
Jan 2013
2³·3²·41 Posts
I haven't played with it yet. I may get in on the P-1 fun, now that LLDC will complete.
#37
Romulan Interpreter
"name field"
Jun 2011
Thailand
7×1,423 Posts
Quote:
Quote:

On the other hand, how can I jump 34 positions up in the P-1 lifetime top without reporting any P-1 work?

Last fiddled with by LaurV on 2022-01-12 at 03:38
#38
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
5,179 Posts
#39
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
5,179 Posts
The following is completely negotiable.
If, just as an example, we choose B1=1M as the preferred minimum for exponent 20M with a RAM allocation of 16GB, and use George's "every time you double the exponent the suggested B1 drops by a factor of 2.2", I get this formula for a recommended B1 for any exponent. I won't bet the farm I have this right, but it seems to spot-check:
Code:
2.2^LOG(20,000,000/<exponent>, 2) * 1,000,000
We can compute the ratio of recommended B1 to current B1 to find the best candidates. Then, based on this comment where Tha is referring to RDS wisdom:
Quote:

But in any case we could start with the highest ratios and work our way down to 10x.

I did a count of how many exponents we would be looking at with the above parameters:
Code:
Exponent <     Num with ratio >10
 1,000,000        141
 2,000,000      5,188
 3,000,000        285
 4,000,000      2,700
 5,000,000      7,287
 6,000,000      8,992
 7,000,000          0
 8,000,000      8,186
 9,000,000      9,448
10,000,000      7,327
11,000,000     10,321
12,000,000      2,412
13,000,000      1,278
14,000,000      4,294
15,000,000        348
16,000,000          0
17,000,000         65
18,000,000        511
19,000,000        726
20,000,000         11
TOTAL          69,520
There are 5,188 between 1M and 2M. Interesting that there are 0 between 6M and 7M ... someone must have dabbled there.

I guesstimate that a reasonable PC could complete a P-1 such as these in about 1 hour, so these almost 50,000 assignments are not a lot of work. In fact, my 5 PCs could do it in about a year.

So maybe my suggested parameters at the start could be more aggressive? Or maybe this is just far enough. Or maybe, if we extend this as requested to get all 10K ranges under 200 remaining, it gets a lot bigger. I'll calculate later how many more there are between 10M and 20M ... or maybe higher.

===

Ok, going to 20M added 20,000 more; most of them between 10M and 11M.

And, if we go down to where the recommended B1 is 5x the current B1, it almost triples the number of exponents to process.

Thoughts?

Last fiddled with by petrw1 on 2022-01-15 at 06:14 Reason: Charted to 20M ... if cutoff 5 times
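For anyone who would rather script the check than spreadsheet it, here is a minimal Python sketch of the same idea. The 20M/1M reference point and the 2.2 factor are just the assumptions above (the 2.2 still needs verifying), and the function names are mine:
Code:
import math

REF_EXPONENT = 20_000_000   # reference exponent from the example above
REF_B1 = 1_000_000          # assumed preferred minimum B1 at 20M with 16GB RAM
DROP_FACTOR = 2.2           # George's rough guess: B1 drops ~2.2x per doubling of the exponent

def recommended_b1(exponent):
    """Scale the reference B1 up by 2.2 for every halving of the exponent below 20M."""
    doublings = math.log2(REF_EXPONENT / exponent)
    return REF_B1 * DROP_FACTOR ** doublings

def is_candidate(exponent, current_b1, cutoff=10):
    """Worth redoing if the recommended B1 is at least `cutoff` times the B1 already done."""
    return recommended_b1(exponent) >= cutoff * current_b1

# Spot-check: recommended_b1(10_000_000) is about 2.2M; recommended_b1(5_000_000) about 4.8M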
#40
P90 years forever!
Aug 2002
Yeehaw, FL
2·7·563 Posts
Please verify that 2.2 number, it was just a rough guess.
Maybe a less ambitious project is a good idea this time round. It won't lock you into a big commitment should something more interesting catch your eye. You can always adjust the project's scope by using 20% instead of 10%, 30M instead of 20M, etc. Anyway, it's your baby -- you lead and the crowd will follow :)

P.S. Thank you for making low-level factoring a fun pastime for the last few years. I've enjoyed watching your progress and reading every single post since you started the effort.
#41
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
5,179 Posts
Ooooooohhhh I had assumed it was based on some inner workings of your new code.
Ok, I'll give it some thought ... but there are others more qualified for this.

If the goal is a consistent run-time (clock-time) I could do some testing.
Except-1: I don't have much time before I leave (4 days), and by the time I return (end of March) the Under 2000 project could be close to done.

If the goal is consistent GhzDays I could simply use this ;
Except-1: I can easily vary the B1 but I have no real way of knowing what to use for B2. I just know it gets relatively larger (vs. B1) as the exponent gets smaller.
Except-2: the percentages and GhzDays it reports do NOT match what I am actually seeing reported and awarded by the software; it is about 15% lower. I guess if it is consistently low it is still useful for determining the appropriate parameters.

Quote:

I hadn't gone as far as assuming paternity. If someone else wants to lead I will follow ... but if I am acclaimed it will be fun coordinating another project.

Quote:

Aww, you are too kind; it was a slow start - I actually never did expect it would finish, and I had more comments like "And what would this accomplish?" than followers. However, for the last half a year or so people have been crawling over each other to get involved (well, almost).

Last fiddled with by petrw1 on 2022-01-15 at 05:07
#42
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
5,179 Posts
We also need some direction on how RAM affects this project.
To get comparable results, there is some agreement that we should give a rule of thumb for how to adjust the recommended B1 based on RAM. Comparable as in success rate or GhzDays; run time will definitely be longer if you have less RAM.

For example, again assuming the guidelines are based on 16GB RAM (not formally decided yet) - these are very roughly the numbers I am seeing, but they are from different PCs so they may NOT be reliable:

If you have 12GB RAM then increase the recommended B1 by 10%.
... 8GB RAM ... 30%
... 4GB RAM ... 75%

If you have more than 16GB then you could similarly decrease the recommended B1.

Of course, in the end anything "recommended" really should be "suggested"; everyone is free to make their own choices. We would never be upset if you chose larger bounds, but would hope, for the sake of the ultimate goal of the project, that you would resist reducing the bounds.

Last fiddled with by petrw1 on 2022-01-15 at 06:16
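If anyone wants to automate that, here is a throwaway Python sketch; the multipliers are just the very rough percentages above with an assumed 16GB baseline, so treat every number as a placeholder:
Code:
# Very rough RAM adjustment from this post, assuming a 16GB baseline; nothing here is official.
RAM_B1_MULTIPLIER = {   # allocated RAM in GB -> multiply the recommended B1 by this
    16: 1.00,
    12: 1.10,
    8: 1.30,
    4: 1.75,
}

def adjusted_b1(recommended_b1, ram_gb):
    """Bump the recommended B1 using the closest RAM tier at or below what you can allocate."""
    tiers = sorted(RAM_B1_MULTIPLIER)                       # [4, 8, 12, 16]
    tier = max((t for t in tiers if t <= ram_gb), default=tiers[0])
    return int(recommended_b1 * RAM_B1_MULTIPLIER[tier])

# e.g. adjusted_b1(1_000_000, 8) -> 1_300_000; with 16GB or more it is left unchanged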
#43
Jun 2003
23·233 Posts
I suspect that a good rule of thumb for B1 adjustment would be sqrt( ref RAM / allocated RAM) where ref RAM should be a good "high end" RAM allocation like 24GB (out of a typical 32GB installed RAM).
That means, with a 6GB allocation, you'd have to double the suggested B1, and with a 96GB allocation you can halve it. This will need some validation by running some sample exponents to the suggested B1's, seeing what P95 computes as the optimal B2, and what the corresponding probabilities are.

As for calculating the B1 itself, it might be better to use FFT size rather than exponent, as that is more representative of runtime. But either should be "good enough". There is some FFT data (including max exponent & reference timings) available in the P95 source, but I'm not very sure what each table means (there are lots of different tables). The reference timings could be used to scale the B1.
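A minimal sketch of that rule of thumb in Python, assuming the 24GB "high end" reference allocation suggested above (the reference value itself is still up for discussion):
Code:
import math

REF_RAM_GB = 24   # suggested "high end" reference allocation; not a settled value

def b1_ram_scale(allocated_ram_gb):
    """Proposed rule of thumb: scale the suggested B1 by sqrt(reference RAM / allocated RAM)."""
    return math.sqrt(REF_RAM_GB / allocated_ram_gb)

# b1_ram_scale(6)  -> 2.0   (double the suggested B1)
# b1_ram_scale(96) -> 0.5   (halve it)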
#44
"Vincent"
Apr 2010
Over the rainbow
2×3×11×43 Posts
A question I have: is it worthwhile to rerun a P-1 with the same bounds if you find a factor in stage 1? (Adding the factor found, obviously.)

Last fiddled with by firejuggler on 2022-01-15 at 12:44