2020-07-21, 00:50  #12 
Oct 2019
United States
2·29 Posts 
Thank you nemonusquam. This is exactly what I was not considering. Thank you.
Last fiddled with by jwnutter on 2020-07-21 at 00:51 
2020-07-21, 01:23  #13  
6809 > 6502
"""""""""""""""""""
Aug 2003
101×103 Posts
20622_{8} Posts 
Quote:
Quote:
Search the forum for answers or ask questions. 

2020-07-21, 15:18  #14  
Oct 2019
United States
2·29 Posts 
That said, it seems a bit misleading, to me anyway. As an example, if the current goal is to factor exponents in the 111M range to a bit level of 77 (let's make this assumption for this example), and the exponent 111,000,007 was factored to 77 with no factors found (I realize this is a false statement) and 111,000,031 only to a bit level of 75 with no factors, wouldn't 111,000,031 need to appear in a list of available exponents to TF twice (once for a bit level of 76 and then again for a bit level of 77) to reach the desired end state of having all 111M exponents factored to a bit level of 77? However, based on my current understanding (which could be very inaccurate), both exponents (111,000,007 and 111,000,031) would appear in the Assigned TF field once, even though only one exponent (111,000,007) was "fully" factored to 77.

This is all really cool stuff. Once I have a quasi-complete understanding of the Exponent Status Distribution, I'll work on developing my understanding of the data tables provided here: https://www.mersenne.ca/exponent/102918073.

Thanks again, Uncwilly. You seem to be answering a lot of my questions on a variety of topics. When would you like to have that beer? 

2020-07-21, 15:47  #15 
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
2^{3}×19×29 Posts 
The difference between the PrimeNet goal (cpu trial factoring) and GPUto72 (gpu trial factoring) target levels is typically 4 bit levels, as in https://www.mersenne.ca/exponent/111000031

Arguments have been made that with the advent of the RTX20xx and GTX16xx, the differential ought to be increased to 5 bit levels. The bit level increment thought optimal for use of the same gpu is log2(TF GHzD/day / primality-testing GHzD/day), or for the RTX 2080 Super, approximately log2(3072/73.9) = log2(41.57) = 5.38 bit levels. For the Radeon VII it's log2(1113.6/280.9) = 1.99 bit levels. A reasonable compromise, when using each gpu type for what it's relatively best at, is to do TF 4-5 bits additional on NVIDIA, and P-1 and primality testing on the Radeon VII. (Benchmarks from https://www.mersenne.ca/mfaktc.php and https://www.mersenne.ca/cudalucas.php)

But in regard to jwnutter's question of why each TF level of each exponent isn't counted in the https://www.mersenne.org/primenet/ work distribution map:
a) There's no way to predict that total, other than by some statistically informed guesses.
b) Think "a prime is a prime is a prime". Finding a factor early is good, saving a lot of further work and retiring the exponent from further consideration. Additional bit levels would not normally be trial factored in that case; P-1 is not performed, primality testing is not performed, and primality test verification is not performed. Completing a TF bit level with no factor found does not eliminate an exponent from further consideration; it only crosses off a small bit of work from the to-do list.
c) Not all TF levels are equal. In fact, they're approximately exponentially related: if doing 73 to 74 is one unit of effort, 74 to 75 is about twice as much, 75 to 76 about 4 times as much, 76 to 77 about 8 times as much, and 77 to 78 about 16 times as much.
d) The report is about exponent status, not smallest-individual-assignment-possible status.

There's much more about trial factoring at https://www.mersenneforum.org/showpo...23&postcount=6

Last fiddled with by kriesel on 2020-07-21 at 16:00 
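The arithmetic in this post can be sketched numerically. The GHzD/day figures below are the ones quoted above, and the doubling of effort per bit level follows from the number of candidate factors roughly doubling with each additional bit. A minimal Python illustration:

```python
from math import log2

# Relative cost of successive TF bit levels: each additional bit roughly
# doubles the number of candidates to test, so effort doubles per bit.
def relative_effort(bits_from, bits_to, base=73):
    """Effort of TF from bits_from to bits_to, in units of the 73->74 level."""
    return sum(2 ** (b - base) for b in range(bits_from, bits_to))

# Optimal extra TF depth for a gpu: log2 of its TF throughput divided by
# its primality-testing throughput (GHzD/day figures quoted in the post).
def extra_bits(tf_ghzd_per_day, pt_ghzd_per_day):
    return log2(tf_ghzd_per_day / pt_ghzd_per_day)

print(relative_effort(73, 74))             # one unit of effort
print(relative_effort(77, 78))             # ~16x as much
print(round(extra_bits(3072, 73.9), 2))    # RTX 2080 Super: ~5.38 bits
print(round(extra_bits(1113.6, 280.9), 2)) # Radeon VII: ~1.99 bits
```

This reproduces the 5.38 and 1.99 bit-level figures from the post exactly.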
2020-07-21, 17:00  #16  
If I May
"Chris Halsall"
Sep 2002
Barbados
3×7×443 Posts 
Before Ben entered the space, the TF'ing effort was actually comfortably ahead of the LL'ing (and P-1'ing) effort (at all four "wavefronts"), and pulling ever further ahead. Now we're "surfing the waves" quite tightly. Literally "just in time" in some situations. 

2020-07-21, 17:59  #17  
Oct 2019
United States
58_{10} Posts 
https://www.techradar.com/news/nvidiaampere
https://www.techradar.com/news/nvidi...veamdworried
https://www.techradar.com/news/rtx3080
https://www.pcgamer.com/nvidiaamper...otagingwell/
https://www.pcgamer.com/nvidiaamper...sperformance/


2020-07-21, 18:04  #18  
Oct 2019
United States
2×29 Posts 


2020-07-21, 18:28  #19  
6809 > 6502
"""""""""""""""""""
Aug 2003
101×103 Posts
2×4,297 Posts 
As p goes up, the number of potential factors below a given size goes down. That is why we can factor them to higher bit levels quicker. Add that to Ken's formula for how much effort to apply, and the bit levels really zoom up, as seen in the smooth curved lines here: https://www.mersenne.ca/graphs/facto...M_20200721.png 
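The point about larger exponents having fewer potential factors below a given size can be sketched: factors of 2^p-1 (p prime) must have the form 2kp+1, so the count of candidates below any bit level falls off as p grows. A rough Python illustration, ignoring the additional sieving tricks used by real TF software:

```python
# Factors of a Mersenne number 2^p - 1 (p prime) must have the form
# q = 2*k*p + 1, so the number of candidate factors below 2^bits is
# roughly (2^bits - 2) // (2*p): the larger the exponent p, the fewer
# candidates there are below any given bit level, which is why large
# exponents can be trial factored to higher bit levels in the same time.
def candidates_below(p, bits):
    """Count of k >= 1 with 2*k*p + 1 < 2^bits (before any sieving)."""
    return max(0, (2 ** bits - 2) // (2 * p))

# The three exponents used as examples later in this thread:
for p in (3_321_937, 33_219_281, 332_192_831):
    print(p, candidates_below(p, 70))
```

For the 100M-digit exponent 332,192,831 the count below 2^29 is zero, matching the observation that its smallest candidate is already 30 bits.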

2020-07-21, 19:39  #20  
Oct 2019
United States
2×29 Posts 
I think I understand part of this chart, but what do the lines represent (green, blue, red, and black scatter)? Based on the sudden decline at 100M, I'm assuming the black scatter represents known factor bit levels, which appears to match up with current TF assignments here: https://www.mersenne.org/primenet/. But I still don't understand the other curves (green, blue, and red).

And I guess I don't fully understand this comment: "That is why we can factor them to higher bit levels quicker." Does this mean that as the exponent gets larger, specific lower bit levels are ignored? (insert mind-exploding emoji here)

You bring up an interesting point that I was going to ask this group about the other day but forgot (and now I'm getting a bit off topic; my apologies). Earlier this month when testing my GPU, I noticed that I was able to get around 4,000 GHzd/d when TF'ing exponents in the 333M range to a bit level of 75. This output is about 15% higher than what I'm seeing today while testing 111M exponents to 74. I'm not sure why, but for some reason this seemed very strange to me. Any thoughts? 

2020-07-21, 20:35  #21  
6809 > 6502
"""""""""""""""""""
Aug 2003
101×103 Posts
10000110010010_{2} Posts 
https://www.mersenne.ca/graphs/facto...M_20200721.png

The lines represent different goal levels. The orangeish-red line is the level that the Prime95 program would take an exponent to. The blue line is the 2-3 bits more than that that the average GPU should do. The green line is the 5 bits higher that Ken talked about. The black dots show where the exponents currently are. That drop-off is the "riding the wave" that Chris (chalsall) was talking about. If we could not do all of them up to the desired bit level to keep ahead of the wave, we could dump a bunch at 1 or 2 bits lower to keep the beast fed.

Let's look at some specific examples for the bit levels:
2^3321937-1 (the first exponent whose Mersenne number has 1,000,000 decimal digits)
2^33219281-1 (the first exponent whose Mersenne number has 10,000,000 decimal digits)
2^332192831-1 (the first exponent whose Mersenne number has 100,000,000 decimal digits)

The smallest possible factor for each is 2*1*p+1; that 1 is the smallest 'k' value possible. Doing the math, that is (in binary):
110 0101 0110 0000 1010 0011 (23 bits) for the first number,
11 1111 0101 1100 0101 1010 0111 (26 bits) for our second example,
10 0111 1001 1001 1011 1000 0111 1111 (30 bits) for our third example.
Those are our starting levels. 1111 0111 1000 0000 1100 0001 0001 0011 1001 (36 bits, 66,438,566,201 base 10) is the size of k=100 for our third number. So, there is no point (and no way in the method being used) to check for 29-bit factors for our third number, and we are starting 7 bits higher than for the first example. There are other math tricks to eliminate potential factors without testing them, too.

And BTW, the really high factors are being found by P-1, which can find some crazy big factors. 
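The bit-length arithmetic above can be checked mechanically. A small Python sketch using the three example exponents from this post:

```python
# Smallest trial-factoring candidate for 2^p - 1 is 2*1*p + 1 (k = 1).
# Its bit length sets the starting bit level for TF, which is why larger
# exponents start several bits higher.
exponents = {
    3_321_937: "1,000,000 digits",
    33_219_281: "10,000,000 digits",
    332_192_831: "100,000,000 digits",
}

for p, label in exponents.items():
    q = 2 * 1 * p + 1  # smallest possible factor candidate (k = 1)
    print(f"p={p} ({label}): smallest candidate {q} is {q.bit_length()} bits")

# k = 100 for the 100M-digit exponent is already a 36-bit candidate:
print((2 * 100 * 332_192_831 + 1).bit_length())  # -> 36
```

This reproduces the 23-, 26-, 30-, and 36-bit figures given above, including the 7-bit head start (30 - 23) for the 100M-digit exponent.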

2020-07-24, 09:10  #22 
Romulan Interpreter
Jun 2011
Thailand
2×11×397 Posts 
That is true, and expected. The reason is that most of the cards sold today are overclocked (at the factory, or by the user) compared with the chip specs. So, when James receives those benchmark figures, they fill the whole OC and UC spectrum (edit: yes, some people also UC; the reasons are energy saving, better efficiency, less heat, less noise), therefore he has to "scale" them with clocking. So, the tables have to be read as "assuming your card runs at the clock specified in its datasheet". Most probably, your card is factory-OC'd by those 15%-20% (true for "Super" cards). If you watercool and OC (as in my case) then you can find differences of up to 30% and even 50% compared with those tables (depending on the card, system, etc).

Last fiddled with by LaurV on 2020-07-24 at 09:17 
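LaurV's scaling point can be sketched as a simple proportion. The clock values here are illustrative placeholders, not real datasheet numbers:

```python
# mersenne.ca benchmark tables assume the chip's datasheet clock. An
# overclocked card runs faster in rough proportion to its clock, so the
# observed GHzD/day can be estimated by scaling the table figure.
# The clock and throughput values below are illustrative, not real specs.
def expected_throughput(table_ghzd_day, spec_clock_mhz, actual_clock_mhz):
    """Scale a benchmark-table figure to the card's actual clock."""
    return table_ghzd_day * actual_clock_mhz / spec_clock_mhz

# A card running ~15% above its spec clock shows ~15% more throughput:
print(expected_throughput(3500.0, 1650, 1900))  # ~4030 GHzD/day
```

This matches the post's point that a factory-OC'd card can beat the table figures by 15-20%.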