#1 |
"Mark"
Apr 2003
Between here and the
22×7×223 Posts |
Could you stop using PII/400 days and use something more meaningful for most users? Besides, some of those numbers are so large that they could intimidate some potential participants.
#2 |
Just call me Henry
"David"
Sep 2007
Cambridge (GMT/BST)
2·2,909 Posts |
I would suggest Haswell GHz-days or something like that. Another alternative would be Core 2 GHz-days, as that would match PrimeNet.
#3 |
Banned
"Luigi"
Aug 2002
Team Italia
26·3·52 Posts |
I decided to split the discussion into a new thread...
#4 |
Banned
"Luigi"
Aug 2002
Team Italia
26·3·52 Posts |
When we decide to change the time unit for the project, we face two different issues:

1 - Choose a new productivity index for all the software involved in the project.
2 - Change/translate all the previous results into the new unit.

Both issues have their sub-issues:

1a - Different software has different optimizations: if we want an exact estimate of the time, we need to test each program across different ranges (as I did on the productivity page, http://www.fermatsearch.org/productivity.html ).
1b - Different software has different algorithms: I never tested either gmp-ecm or mprime for speed; I just extrapolated their speed from the apparent time required to complete curves, based on the completion time of fermat.exe.
1c - I also based the productivity of mmff.exe on the time fermat.exe needs to complete the same range, not on the effective speed of the software; the same is true for gmp-fermat.
1d - We have two different versions of gmp-fermat: one with assembly routines (Brazel/Reynolds) and one without, running on Linux and Windows. Would it be possible to have just one final, unified version (say 3.0) for all?
1e - We also have two versions of FermFact: v0.9 and v2.0. Does everyone agree to use the latest version for timing purposes?
1f - ppsieve and ppsieve-cuda have the same issue: should we use single-thread timing for the CPU version? And what about the GPU version?
2a - Should we go for completion days or GHz-days? The question arises because we will use distinct hardware and OSes, and with GPUs the GHz-day unit may raise issues when compared with CPU GHz-days.
2b - Should we translate old results as they are, using a multiplicative ratio (as we did when we computed 1 PII/400 day = 5.5 P90 days), or recompute them one by one using the timing of the correct algorithm?
2c - What should we do if some results were obtained with a different (more/less efficient) program? E.g., if I used fermat.exe on a range where mmff should have been used?
2d - There are AT LEAST two people (Maznichenko and Gostin) who are using their own factoring programs for CPU, plus a new GPU program: they may not like the idea of sharing their code, or even comparing it with the standard code.

If you'd like to discuss a solution, I will most happily translate it for the pages.

Luigi
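The multiplicative-ratio approach in issue 2b can be sketched in a few lines. This is a hypothetical illustration, not project code: the ratio below reuses the 5.5 figure only as a placeholder (it was the PII/400-to-P90 factor, not a measured value for any new unit), and the range names are made up.

```python
# Sketch of issue 2b: translating legacy credit with one multiplicative
# ratio, the way 1 PII/400 day was once mapped to 5.5 P90 days.
# NEW_UNIT_PER_PII400_DAY is a placeholder, not a measured conversion.

NEW_UNIT_PER_PII400_DAY = 5.5

def convert_legacy(pii400_days: float,
                   ratio: float = NEW_UNIT_PER_PII400_DAY) -> float:
    """Translate an old PII/400-day credit into the new unit."""
    return pii400_days * ratio

# Hypothetical legacy results, converted in bulk:
old_results = {"range_A": 120.0, "range_B": 40.0}
new_results = {name: convert_legacy(days) for name, days in old_results.items()}
```

The alternative in 2b, recomputing each result with per-algorithm timings, would replace the single `ratio` with a lookup keyed by the program that produced the result.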
#5 |
"Mark"
Apr 2003
Between here and the
22×7×223 Posts |
Quote:
1e - Yes.
1f - Is that faster than FermFact?
2a - It is just an estimate, so does it really matter that much? I think GHz-days make the most sense considering multiple cores. For software that can use the GPU, there is too much variability in GPU speeds. Is there a "middle of the road" GPU that could be used?
2b - I personally don't care.
2c - The user gets credit for the software they are "supposed" to use. It is their loss/gain to use something slower/faster, but if they find something faster, they could let everyone else know about it.
2d - I would ask them if they would be willing to share either an exe or source. I see little reason for secrecy: there isn't any money or fame associated with finding Fermat factors. Maybe their software has a bug. Maybe their software has room for improvement. There is only one way to find out. If their software really is that much faster than everyone else's, it would benefit the project greatly.

IMO, it is more important to know how much time a range might take than any stats. Maybe the calculation should really be based on the range of k and the size of n.
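The GHz-days idea discussed under 2a is simple arithmetic: credit scales with clock speed, core count, and wall-clock time. The formula below is a minimal sketch of that idea for illustration only; it is not PrimeNet's actual crediting rule, which also weighs the work type.

```python
# Minimal sketch of the GHz-days unit: work expressed as days of one
# hypothetical 1 GHz core. The formula is an assumption for illustration,
# not PrimeNet's real credit calculation.

def ghz_days(clock_ghz: float, cores: int, days: float) -> float:
    """GHz-days accumulated by `cores` cores at `clock_ghz` over `days`."""
    return clock_ghz * cores * days

# e.g. a 3.0 GHz dual-core running for two days:
credit = ghz_days(3.0, 2, 2.0)  # 12.0 GHz-days
```

The GPU variability Mark raises is exactly why a plain clock-based formula breaks down there: two GPUs at the same clock can differ enormously in throughput.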
#6 |
Banned
"Luigi"
Aug 2002
Team Italia
26·3·52 Posts |
Quote:
1f - I found ppsieve often faster than FermFact, but I had a fast GPU. Considering that I have no Windows machine to run FermFact, it works quite well for me.

We may keep the current stats system for phase one, attacking only the "completion time / days" formula. Once we have something the majority agrees on, we can move to phase two and rework the whole stats system.

Right now I have an Intel Pentium G2030 @ 3.00 GHz × 2 processor (I guess it's like a Sandy Bridge or an Ivy Bridge) and a GTX 980 GPU. I can retest the Linux software I have on the N values from the Productivity page to create a new set of stats indexed on k/sec (we may as well use a graph). It would be nice if someone could test the same data on Windows using fermat.exe 4.4 and FermFact (I am inclined to abandon Proth testing, but if you [generic] would like to test it as well, you are welcome!).

Testing the siever/prover combination is another, smaller issue: I am inclined to sieve a range until the "prover" (pfgw) becomes faster at eliminating ks. Note that, for completeness' sake, I tested FermFact/pfgw even on lower/single Ns, where the NewPGen/pfgw pair might have been faster; that will be added in the next test case. Also, I ran no tests with LLR, PRP, or srsieve. Should they be added?

Finally, a question related to prime95/prime64/mprime in ECM mode and gmp-ecm, considering that I suggest the use of ECM for 12 <= N <= 29. I am going to test mprime using the B1 bounds suggested by George Woltman at http://www.mersenne.org/report_ecm/ , with B2 = 100*B1 as always suggested. I am also going to time gmp-ecm using the standard proposed B2, if memory serves. Once done, I will add the efficiency of each program ( http://www.fermatsearch.org/factors/programs.php ) to this table and update the testing page. I will use days/hours/minutes/seconds timed on my CPU and GPU. Once the matrix is complete, we will opt for a more universal and convenient unit.

Does it all look feasible?

Luigi

Last fiddled with by ET_ on 2016-07-16 at 09:48
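Luigi's testing plan pairs each exponent N with a B1 bound and derives B2 = 100*B1. A sketch of that bound selection, assuming made-up B1 values (the real ones live at mersenne.org/report_ecm/ and are not reproduced here):

```python
# Sketch of Luigi's ECM bound plan: per-N B1 bounds with B2 = 100 * B1.
# The B1 values in SUGGESTED_B1 are placeholders, NOT the bounds from
# http://www.mersenne.org/report_ecm/ .

SUGGESTED_B1 = {
    20: 50_000,     # hypothetical B1 for F20
    25: 250_000,    # hypothetical B1 for F25
    29: 1_000_000,  # hypothetical B1 for F29
}

def ecm_bounds(n: int) -> tuple:
    """Return (B1, B2) for Fermat number F_n, using B2 = 100 * B1."""
    b1 = SUGGESTED_B1[n]
    return b1, 100 * b1
```

With gmp-ecm the B2 default for a given B1 differs from 100*B1, which is why Luigi plans to record the chosen B2 separately when timing it.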
#7 |
Banned
"Luigi"
Aug 2002
Team Italia
26·3·52 Posts |
I am working on a new timing matrix, as you can see from the attached xls file.
Help is not strictly required, but welcome, especially for the ECM section; should you choose to run a couple of curves with Prime95/mprime, please give precedence to the ones with a yellow background. If you prefer gmp-ecm, please also record the software version (6.3? 6.4? 7.0?) and the B2 chosen for the usual B1. Windows software timings are also welcome, when possible (note that I already extrapolated the Proth.exe results). I will wait for gmp-fermat 3.0 to complete the table (some results are actually extrapolations of the old ones, and I would like to recheck them).

The "most wanted" page has been updated with the new timing tables, and it looks a lot more "doable" now...

Please keep your ideas and tests coming!

Luigi

Last fiddled with by ET_ on 2016-07-16 at 14:16