#12
Einyen
Dec 2003
Denmark
2·3²·191 Posts
#13
"Robert Gerbicz"
Oct 2005
Hungary
3²·179 Posts
The TF bit level for M_n is about k = log(n)*3/log(2) = 3*log2(n).
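To plug numbers into this rule of thumb, here is a minimal C sketch (since log(n)/log(2) = log2(n), the base of the logarithm does not matter; the sample exponents are taken from elsewhere in this thread). It gives the rough trend only; the exact breakevens prime95 uses are quoted later in the thread.

Code:
#include <math.h>
#include <stdio.h>

/* Rough TF bit level for M_n per the rule of thumb above:
 * k = 3*log(n)/log(2) = 3*log2(n).  The base of the log cancels out. */
static double tf_bits_rule_of_thumb(double n)
{
    return 3.0 * log(n) / log(2.0);
}

int main(void)
{
    /* Sample exponents taken from elsewhere in this thread. */
    double exponents[] = { 1000000.0, 35000011.0, 516000000.0 };

    for (int i = 0; i < 3; i++)
        printf("p = %.0f  ->  k ~ %.1f bits\n",
               exponents[i], tf_bits_rule_of_thumb(exponents[i]));
    return 0;
}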
#14
Dec 2007
Cleves, Germany
2×5×53 Posts
#15
Einyen
Dec 2003
Denmark
2×3²×191 Posts
If you look at the data from 66 bits to 80 bits, it can be fitted with:
bitdepth = 22.94*exponent^0.0623 or bitdepth = 10.4428*log(exponent) - 11.15
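As a quick sanity check of these fits, here is a minimal C sketch that evaluates both expressions at p = 37800000, the FAC68 breakeven quoted later in the thread; both come out very close to 68 bits. Reading the second fit's log as base 10 is my assumption.

Code:
#include <math.h>
#include <stdio.h>

/* Power-law fit from the post above. */
static double fit_power(double p)
{
    return 22.94 * pow(p, 0.0623);
}

/* Logarithmic fit from the post above, taking the log as base 10. */
static double fit_log10(double p)
{
    return 10.4428 * log10(p) - 11.15;
}

int main(void)
{
    double p = 37800000.0;  /* FAC68 breakeven from commonc.h, quoted below */

    printf("p = %.0f: power fit = %.2f bits, log fit = %.2f bits\n",
           p, fit_power(p), fit_log10(p));
    return 0;
}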
#16
Dec 2003
2³·3³ Posts
I wonder what hardware those bit levels were calculated on. I assume the optimal setting must be different on AMD64 (64-bit), because it is so much faster at trial factoring than at LL testing. One factor found by trial factoring saves two LL tests. For some exponents, AMD64 owners should probably factor one bit deeper to potentially save the LL test. P-1 complicates matters as well: a given factor of n bits has probability x of being found by P-1 with B1=y, B2=z. I guess some hardware is better at stage 1; if your RAM is fast compared to your CPU, you are probably better off with a lower B1 and a higher B2. Perhaps the limits should be different for each computer, based on CPU and RAM speed.
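The break-even reasoning here can be written out explicitly. Below is a minimal C sketch of it, using the usual GIMPS heuristic that the chance of a factor in bit level k is roughly 1/k; the benchmark times are placeholder values, not measurements of any particular machine.

Code:
#include <stdio.h>

/* Going one bit level deeper in TF pays off when the time it costs is less
 * than the expected time it saves: the chance of a factor in that bit range
 * (roughly 1/k for bit level k) times the two LL tests (first test plus
 * double-check) that a factor would eliminate.  This ignores the chance
 * that P-1 would have found the same factor, which, as noted above,
 * complicates matters. */
static int deeper_tf_pays_off(int k, double tf_time_bit_k, double ll_time)
{
    double expected_saving = (1.0 / k) * 2.0 * ll_time;
    return tf_time_bit_k < expected_saving;
}

int main(void)
{
    double ll_time        = 60.0;  /* hours per LL test, placeholder value */
    double tf_time_bit_69 = 1.2;   /* hours to TF from 2^68 to 2^69, placeholder */

    printf("TF to bit 69 %s\n",
           deeper_tf_pays_off(69, tf_time_bit_69, ll_time)
               ? "pays off" : "does not pay off");
    return 0;
}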
#17
"Jacob"
Sep 2006
Brussels, Belgium
3×5×127 Posts
What could be done, though, is assign TF up to 63 bits to AMD64 and Intel 64-bit CPUs, because at those depths that CPU/software combination really makes a difference. But again, is it worth the complexity?

Concerning P-1, I am of the opinion that all exponents should be done to their ideal level, not a level determined by the available memory. When you receive double-checks, some have been done to limits half those of others, and some exponents did not even have a stage 2 done... The problem from PrimeNet's point of view is that the software would not be as invisible (trying to grab too much memory, and trying a stage 2 with only 8 MB of memory assigned, well...).

Jacob
#18
"Richard B. Woods"
Aug 2002
Wisconsin USA
2²·3·641 Posts
The levels aren't very sensitive to hardware type. Modest differences in TF efficiency across hardware don't make much difference when the steps are powers of 2.
But since you asked (answer: a 2.0 GHz P4 Northwood), here's the relevant part of the v25.2 source module commonc.h:

Code:
/* Factoring limits based on complex formulas given the speed of the */
/* factoring code vs. the speed of the Lucas-Lehmer code */

/* As an example, examine factoring to 2^68 (finding all 68-bit factors). */
/* First benchmark a machine to get LL iteration times and trial factoring */
/* times for a (16KB sieve of p=35000011). */

/* We want to find when time spend eliminating an exponent with */
/* trial factoring equals time saved running 2 LL tests. */

/*  runs to find a factor (68) *
    #16KB sections (2^68-2^67)/p/(120/16)/(16*1024*8) *
    factoring_benchmark = 2.0 * LL test time (p * ll_benchmark)

    simplifying:

    68 * (2^68-2^67)/p/(120/16)/(16*1024*8) * facbench = 2 * p * llbench
    68 * 2^67 / p / (120/16) / 2^17 * facbench = 2 * p * lltime
    68 * 2^49 / p / (120/16) * facbench = p * lltime
    68 * 2^49 / (120/16) * facbench = p^2 * lltime
    68 * 2^53 / 120 * facbench = p^2 * lltime
    68 * 2^53 / 120 * facbench / lltime = p^2
    sqrt (68 * 2^53 / 120 * facbench / lltime) = p
*/

/* Now lets assume 30% of these factors would have been found by P-1.  So
   we only save a relatively quick P-1 test instead 2 LL tests.  Thus:
    sqrt (68 / 0.7 * 2^53 / 120 * facbench / lltime) = p
*/

/* Now factor in that 35000000 does 19 squarings, but 70000000 requires 20.
   Thus, if maxp is the maximum exponent that can be handled by an FFT size:
    sqrt (68 / 0.7 * 2^53 / 120 * facbench *
          (1 + LOG2 (maxp/35000000) / 19) / lltime) = p
*/

/* Now factor in that errors sometimes force us to run more than 2 LL tests.
   Assume, 2.04 on average:
    sqrt (68 / 0.7 * 2^53 / 120 * facbench *
          (1 + LOG2 (maxp/35000000) / 19) / lltime / 1.02) = p
*/

/* These breakeven points we're calculated on a 2.0 GHz P4 Northwood: */

#define FAC80   516000000L
#define FAC79   420400000L
#define FAC78   337400000L
#define FAC77   264600000L
#define FAC76   227300000L
#define FAC75   186400000L
#define FAC74   147500000L
#define FAC73   115300000L
#define FAC72   96830000L
#define FAC71   75670000L
#define FAC70   58520000L
#define FAC69   47450000L
#define FAC68   37800000L
#define FAC67   29690000L
#define FAC66   23390000L

/* These breakevens we're calculated a long time ago on unknown hardware: */

#define FAC65   13380000L
#define FAC64   8250000L
#define FAC63   6515000L
#define FAC62   5160000L
#define FAC61   3960000L
#define FAC60   2950000L
#define FAC59   2360000L
#define FAC58   1930000L
#define FAC57   1480000L
#define FAC56   1000000L
Last fiddled with by cheesehead on 2008-11-03 at 02:13
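For anyone wanting to reproduce a breakeven from the last formula in that comment, a minimal C sketch follows. The facbench and lltime values below are placeholders, not the actual P4 Northwood benchmarks; their ratio (about 0.2) is simply chosen so that bit level 68 lands near the FAC68 value quoted above.

Code:
#include <math.h>
#include <stdio.h>

/* Breakeven exponent for factoring to 2^bits, per the last formula in the
 * commonc.h comment quoted above:
 *   p = sqrt(bits / 0.7 * 2^53 / 120 * facbench
 *            * (1 + log2(maxp/35000000) / 19) / lltime / 1.02)
 * facbench = time per 16KB trial-factoring sieve section,
 * lltime   = time per LL squaring,
 * maxp     = largest exponent the current FFT size can handle. */
static double breakeven_exponent(double bits, double facbench, double lltime,
                                 double maxp)
{
    double fft_adjust = 1.0 + log2(maxp / 35000000.0) / 19.0;
    return sqrt(bits / 0.7 * pow(2.0, 53) / 120.0
                * facbench * fft_adjust / lltime / 1.02);
}

int main(void)
{
    /* Placeholder benchmark times (seconds), purely illustrative. */
    double facbench = 0.010;
    double lltime   = 0.050;
    double maxp     = 40000000.0;

    printf("FAC68 breakeven ~ %.0f\n",
           breakeven_exponent(68.0, facbench, lltime, maxp));
    return 0;
}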
#19
Dec 2007
Cleves, Germany
1022₈ Posts
#20
"Jacob"
Sep 2006
Brussels, Belgium
3·5·127 Posts
Indeed. I goofed up there.
Jacob