Now with graphs.
[QUOTE=James Heinrich;412169]Now with graphs.[/QUOTE]
Unbelievably cool!!! Thanks James. :smile: P.S. Sorry for this, but how's the P-1 query going? :wink:
[QUOTE=chalsall;412174]how's the P-1 query going? :wink:[/QUOTE]:max:
More seriously, perhaps you can break down more specifically what you'd like to see on the P-1 data. Mersenne.info doesn't have any P-1 data that I saw, so I can't copy your good ideas from there.

If you just want to know how many exponents in each range have had P-1 done I should be able to extract that data reasonably easily. If you want something more than that, please explain to me [i]before[/i] I start working on it :smile:
[QUOTE=James Heinrich;412177]More seriously, perhaps you can break down more specifically what you'd like to see on the P-1 data. Mersenne.info doesn't have any P-1 data that I saw so I can't copy your good ideas from there.
If you just want to know how many exponents in each range have had P-1 done I should be able to extract that data reasonably easily. If you want something more than that, please explain to me [i]before[/i] I start working on it :smile:[/QUOTE]
Mersenne.info never tracked P-1 progress nor deltas. In all honesty, when I started mersenne.info, P-1 was almost non-existent. The issue we are now facing is that we are (as a community) seriously overpowered with P-1, so tracking candidates without P-1 compared to those with P-1 done is important for our trending and resource management.

To answer your question: very similar to your TF'ing reports -- how many candidates were TFed, how many candidates were P-1'ed (at what bit level), how many candidates were DC'ed (at what bit level), and how many candidates were LL'ed (at what bit level). This will help us all balance our collective resources for the best overall throughput.

I hope that makes sense; please let me know if it doesn't.
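To put that into something query-shaped, I'm picturing roughly the following -- to be clear, the table and column names here are completely made up, just to illustrate the kind of breakdown I'm after:
[CODE]-- Rough sketch only; the table name and the columns
-- (tf_bits, pm1_done, ll_done, dc_done) are placeholders,
-- not anything from the real data.
SELECT FLOOR(exponent / 1000000) * 1000000 AS range_start,
       tf_bits,
       COUNT(*)      AS total,      -- candidates at this TF level in this range
       SUM(pm1_done) AS pm1_count,  -- of which P-1 completed
       SUM(ll_done)  AS ll_count,   -- of which first-time LL completed
       SUM(dc_done)  AS dc_count    -- of which double-check completed
FROM candidates
GROUP BY range_start, tf_bits
ORDER BY range_start, tf_bits;[/CODE]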
Your example seems to have many dimensions. Perhaps you could help me visualize that better with a sample table of data you'd like to see? Do you want to see all that data in one table? Or is that 4 separate reports?
I'm not sure how easy it is to parse my data for what bit level the exponent was at when it was P-1'd (or LL'd); I'll have to investigate.
As I understand it, the problem at hand is knowing how many unassigned exponents have no P-1, and what TF bit-level those exponents are at, and how fast those assignments are being consumed.
So it could be as simple as adding an option at the top to show only unassigned exponents without P-1. The table and graph could show changes exactly as they currently do for TF, with and without deltas.
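Purely as a sketch of that option (the column names are invented for illustration, not anything from James' actual data):
[CODE]-- Count of unassigned, no-P-1 exponents at each TF bit level
SELECT tf_bits, COUNT(*) AS available_no_pm1
FROM candidates
WHERE pm1_done = 0   -- no P-1 recorded (assumed column)
  AND assigned = 0   -- not currently assigned (assumed column)
GROUP BY tf_bits
ORDER BY tf_bits;[/CODE]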
Another way of thinking of it is a flag you can toggle to view the TF/LL reports for exponents that have been P-1'd, not P-1'd, or the whole set.
[QUOTE=Mark Rose;412192]As I understand it, the problem at hand is knowing how many unassigned exponents have no P-1, and what TF bit-level those exponents are at...[/QUOTE]
Do you guys need a list of suboptimally TF'd stuff that hasn't had any P-1 work (and is available)? Is that the main thing needed? I can't really help much with the rate at which things are being processed, but I can generate lists like this (just the top 50 here):
[CODE]exponent  TFBits
80010001  73
80012903  71
80012917  71
80012923  71
80012957  71
80013163  72
80013253  72
80013257  72
80013347  71
80013397  72
80013749  71
80013827  71
80013877  71
80013979  71
80014027  71
80014537  71
80014601  71
80014663  71
80014673  71
80014703  72
80016107  71
80016121  71
80016199  71
80016281  71
80016359  71
80016613  71
80016623  71
80016647  71
80016667  71
80017087  71
80017097  71
80017099  71
80017187  71
80017243  71
80017627  71
80017649  71
80017739  71
80017837  71
80017871  71
80018023  72
80018041  72
80018327  72
80018333  72
80018377  72
80018381  72
80018767  71
80018857  72
80018933  72
80018963  72
80019227  72[/CODE]
PS - That's using this little bit of a clause for the TF bit levels... I didn't know what the ideal level for the 80M+ stuff would be, thus the 75 bits for everything 80M+:
[CODE]((exponent between 0 and 40e6 and no_factor_to_bits<71) OR
 (exponent between 40e6 and 50e6 and no_factor_to_bits<72) OR
 (exponent between 50e6 and 65e6 and no_factor_to_bits<73) OR
 (exponent between 65e6 and 80e6 and no_factor_to_bits<74) OR
 (exponent between 80e6 and 999e6 and no_factor_to_bits<75))[/CODE]
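For context, a rough sketch of the full query that clause slots into -- the table name and the P-1/assignment columns here are stand-ins, not necessarily what the real server schema calls them:
[CODE]-- Illustrative sketch only; actual table/column names on the server may differ
SELECT exponent, no_factor_to_bits AS TFBits
FROM candidates               -- hypothetical table name
WHERE pm1_done = 0            -- no P-1 work recorded (assumed column)
  AND assigned = 0            -- not currently assigned (assumed column)
  AND ((exponent between 0 and 40e6 and no_factor_to_bits<71) OR
       (exponent between 40e6 and 50e6 and no_factor_to_bits<72) OR
       (exponent between 50e6 and 65e6 and no_factor_to_bits<73) OR
       (exponent between 65e6 and 80e6 and no_factor_to_bits<74) OR
       (exponent between 80e6 and 999e6 and no_factor_to_bits<75))
ORDER BY exponent
LIMIT 50;[/CODE]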
I don't have easy access to which exponents are assigned or not, so if it's important to you guys to have the P-1 report based on that then I'll need to enlist [i]Madpoo[/i]'s help to generate some data for me.
It shouldn't be filtered by "suboptimal" TF; I'd want data on TF levels of all exponents, and not a list of exponents but just a count per range. Something like:
[code]SELECT COUNT(*) AS `howmany`,
       (FLOOR(`exponent` / 10000) * 10000) AS `10k_range`,
       `tf_bits`, `pm1_is_done`, `pm1_is_available`, `pm1_is_assigned`
FROM `table`
GROUP BY `10k_range`, `tf_bits`, `pm1_is_done`, `pm1_is_available`, `pm1_is_assigned`[/code]
The last 3 boolean columns should always have one and only one of the 3 set to true -- it's either done, or it's currently assigned, or it's not-done and not-assigned and therefore available. It could just as well be represented by a single field (e.g. 0=done, 1=assigned, 2=available).

@Aaron: how heavy a query is something like that?
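If the single-field representation is easier on the server side, the same grouping could be computed on the fly -- purely a sketch, using the same made-up column names as above:
[code]SELECT COUNT(*) AS `howmany`,
       (FLOOR(`exponent` / 10000) * 10000) AS `10k_range`,
       `tf_bits`,
       CASE WHEN `pm1_is_done` = 1     THEN 0   -- 0 = P-1 done
            WHEN `pm1_is_assigned` = 1 THEN 1   -- 1 = currently assigned
            ELSE 2                              -- 2 = available
       END AS `pm1_status`
FROM `table`
GROUP BY `10k_range`, `tf_bits`, `pm1_status`[/code]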
[QUOTE=Madpoo;412209]Do you guys need a list of suboptimally TF'd stuff that hasn't had any P-1 work (and is available)? Is that the main thing needed? I can't really help much with the rate at which things are being processed, but I can generate lists like this (just the top 50 here):[/QUOTE]
No, it's more about estimating how much TF can be done before PrimeNet runs out of higher-TFed exponents to hand out for P-1. For instance, recently GPU72 was holding on to a bunch of exponents at 73 bits, but PrimeNet started handing out exponents at 71 bits. It would have been better to release back some of those 73-bit TFed exponents.

Ideally we want to TF them to 75 bits before releasing the exponents back to PrimeNet for P-1, but we don't have the throughput for that. The problem is knowing how big a buffer of exponents (TFed above 71 bits but below 75) to leave at PrimeNet. If we know we need 300 exponents per day, Chris can program GPU72 to make sure an appropriate number are left with PrimeNet at, say, 74 bits or even 73 bits, based on the available TF capacity. So having the deltas is the most important thing.
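To be concrete about the deltas: if a daily snapshot of those per-bit-level counts were kept, the consumption rate is just a day-over-day difference. A sketch, with a completely hypothetical snapshot table:
[CODE]-- Hypothetical table daily_counts(snapshot_date, tf_bits, available_no_pm1),
-- one row per day per TF bit level of unassigned, no-P-1 exponents.
SELECT today.tf_bits,
       yesterday.available_no_pm1 - today.available_no_pm1 AS consumed_per_day
FROM daily_counts AS today
JOIN daily_counts AS yesterday
  ON yesterday.tf_bits = today.tf_bits
 AND yesterday.snapshot_date = today.snapshot_date - INTERVAL 1 DAY
WHERE today.snapshot_date = CURDATE();[/CODE]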
[QUOTE=Mark Rose;412213]For instance, recently GPU72 was holding on to a bunch of exponents at 73 bits, but PrimeNet started handing out exponents at 71 bits. It would have been better to release back some of those 73 bit TFed exponents.[/QUOTE]
Actually, what happened there is GPU72 /was/ releasing some at 73 bits when it "observed" that candidates were about to be assigned for P-1'ing at below that (up in the 80M range). But because it only "looks" at the situation every five minutes, and Mr. P-1, and then later aurashift, were reserving candidates in such large batches, it wasn't keeping up with the releasing.

[QUOTE=Mark Rose;412213]If we know we need 300 exponents per day, Chris can program GPU72 to make sure an appropriate amount are left with PrimeNet at say 74 bits or even 73 bits, based on the available TF capacity. So having the deltas is the most important thing.[/QUOTE]
It's actually a bit more complicated than that, because of the non-linear nature of the assignment requests. Normally it's pretty steady, but sometimes large batches are requested. The ideal is to pull far enough ahead that we've got a comfortable buffer of candidates at 75 bits for both the P-1'ers and, separately, the LL'ers.

Having the deltas allows us to make an informed decision as to how many we can take to 75, and how many we should only take to 74. My goal is to not have anything above 65M assigned for LL'ing unless it has been TFed to at least 75 bits. GPU72 will "recapture" anything released for P-1 at below 75 bits for final TF'ing, assuming a factor isn't found by the P-1 run, of course.