2010-12-10, 02:14  #23  
Jun 2003
23×233 Posts 


2010-12-10, 02:24  #24  
Aug 2006
1011101011011_{2} Posts 


2010-12-10, 06:17  #25  
May 2010
763_{8} Posts 
http://www.mersenneforum.org/showpos...&postcount=152 "487W for GTX 295 under full load!" The Phenom II system I have right now draws ~150 watts at full load. The reference for claim #4 is here: http://mersenneforum.org/showpost.ph...&postcount=379 "Consumer video cards are designed for gaming rather than technical computing, so they don't have as many error-checking features." There's not enough data to provide more accurate figures. 

2010-12-10, 06:27  #26  
May 2010
499 Posts 
http://mersenneforum.org/showpost.ph...&postcount=327 Here's what George has to say: http://mersenneforum.org/showpost.ph...&postcount=339 "if msft develops a CUDA LLR program then it will be modestly more powerful (in terms of throughput) than an i7, just like LL testing. From a project admin's point of view, he'd rather GPUs did sieving than primality testing, as it seems a GPU will greatly exceed (as opposed to modestly exceed) the throughput of an i7." But I'm done debating this issue; it's been beaten to death, and none of the users involved are going to change their minds. 

2010-12-10, 06:37  #27  
A Sunny Moo
Aug 2007
USA (GMT-5)
3·2,083 Posts 
Right now, the only work available from k*2^n+1 prime search projects for GPUs is sieving. Thus, in order to keep the GPUs busy at all, we have to keep sieving farther and farther up in terms of n, which becomes increasingly suboptimal the further we depart from our LLR leading edge. If we had the option of putting those GPUs to work on LLR once everything needed in the foreseeable future has been well-sieved, even if it's not quite the GPUs' forte, we could at least be using them for something that's needed, rather than effectively throwing away sieving work that can be done much more efficiently down the road. Anyway, that's my $0.02...not trying to beat this to death on this end either. 
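(For readers outside the sieving projects: the "increasingly suboptimal" point above follows from the standard stopping rule for sieving. A minimal sketch, with all rates being hypothetical placeholders rather than figures from this thread:)

```python
# Sketch of the usual "when to stop sieving" heuristic: keep sieving while
# one unit of sieving time removes more candidates than the same time spent
# on LLR tests would dispose of. The example rates below are made up.

def should_keep_sieving(removals_per_hour: float,
                        llr_test_hours: float) -> bool:
    """True while sieving eliminates candidates faster than LLR testing
    would, i.e. while removals_per_hour exceeds 1 / llr_test_hours."""
    llr_tests_per_hour = 1.0 / llr_test_hours
    return removals_per_hour > llr_tests_per_hour

# Sieving removes 0.5 candidates/hour; one LLR test takes 3 hours:
print(should_keep_sieving(0.5, 3.0))   # still worth sieving
# Removal rate has dropped to 0.2/hour against the same 3-hour test:
print(should_keep_sieving(0.2, 3.0))   # time to switch to LLR
```

The rate of removals falls as the sieve goes deeper, which is why sieving ever higher n ranges just to keep GPUs fed drifts past this break-even point.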

2010-12-10, 06:58  #28  
Aug 2010
664_{10} Posts 


2010-12-10, 07:48  #29 
A Sunny Moo
Aug 2007
USA (GMT-5)
1869_{16} Posts 
Indeed, that is an option. However, speaking solely from the perspective of a project admin (that is, trying to maximize the utilization of resources within my own project), it would seem worthwhile to have GPU LLR as an option, so that if (say) you have a participant who wants to contribute with his GPU at NPLB but is not particularly interested in TPS, he can still have useful work to do. (Or vice versa.)

2010-12-10, 14:20  #30  
Aug 2006
3×1,993 Posts 


2010-12-10, 18:05  #31  
May 2010
1F3_{16} Posts 
"in a worst-case (for the GPU) scenario, you still need all cores of your i7 working together to match its output! In a best-case scenario, it's closer to twice as fast as your CPU." 

2010-12-10, 18:56  #32 
Just call me Henry
"David"
Sep 2007
Liverpool (GMT/BST)
1760_{16} Posts 
Please remember that your i7 CPU can keep running on most of its cores alongside the GPU app without much additional power consumption.

2010-12-10, 19:16  #33  
P90 years forever!
Aug 2002
Yeehaw, FL
1111011001011_{2} Posts 
A different conclusion is also possible: perhaps prime95's TF code is in need of optimization. 
