#23
Jun 2003
2²·3·449 Posts
#24
Aug 2006
1011101011101₂ Posts
#25
May 2010
763₈ Posts
http://www.mersenneforum.org/showpos...&postcount=152

"487W for GTX 295 under full load!"

The Phenom II system I have right now draws ~150 watts at full load.

The reference for claim #4 is here: http://mersenneforum.org/showpost.ph...&postcount=379

"Consumer video cards are designed for gaming rather than technical computing, so they don't have as many error-checking features."

There's not enough data to provide more accurate figures.
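As a rough illustration of what those two power figures imply (a back-of-the-envelope sketch only -- the 487 W and ~150 W numbers come from different systems and measurement setups, so treat the result as order-of-magnitude):

```python
# Break-even estimate from the two load figures quoted above.
# Both are whole-load numbers from different machines, so this is only
# an order-of-magnitude comparison, not a measurement.
gpu_watts = 487.0   # GTX 295 under full load (linked post)
cpu_watts = 150.0   # Phenom II system under full load

# Drawing ~3.2x the power, the GPU needs ~3.2x the CPU's throughput just
# to break even on energy used per completed test.
break_even = gpu_watts / cpu_watts
print(f"GPU must exceed {break_even:.1f}x the CPU's throughput to win on work per joule")
```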
#26
May 2010
499 Posts
http://mersenneforum.org/showpost.ph...&postcount=327

Here's what George has to say: http://mersenneforum.org/showpost.ph...&postcount=339

"If msft develops a CUDA LLR program then it will be modestly more powerful (in terms of throughput) than an i7 -- just like LL testing. From a project admin's point of view, he'd rather GPUs did sieving than primality testing as it seems a GPU will greatly exceed (as opposed to modestly exceed) the throughput of an i7."

But I'm done debating this issue; it's been beaten to death, and none of the users involved are going to change their minds.
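George's sieving-versus-LLR point boils down to comparative advantage. A toy calculation (with made-up speedup figures, not project data) shows why a project admin would rather have GPUs sieve:

```python
# Toy comparative-advantage calculation with assumed speedups, measured
# in "i7-equivalents" of work per hour.  Not project data.
gpu_speedup_llr   = 1.5   # assumed: GPU "modestly exceeds" an i7 at LLR
gpu_speedup_sieve = 10.0  # assumed: GPU "greatly exceeds" an i7 at sieving

# While the sieve is at or below its optimal depth, an hour of sieving
# removes at least as many candidates as an hour of LLR would finish,
# so each i7-equivalent hour of sieving saves >= 1 i7-hour of LLR work.
value_if_gpu_sieves = gpu_speedup_sieve * 1.0   # LLR-hours saved per hour
value_if_gpu_llrs   = gpu_speedup_llr           # LLR-hours done per hour

print(f"GPU on sieving: ~{value_if_gpu_sieves} i7-hours of LLR saved per hour")
print(f"GPU on LLR:     ~{value_if_gpu_llrs} i7-hours of LLR done per hour")
```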
#27
A Sunny Moo
Aug 2007
USA (GMT-5)
1100001101001₂ Posts
Right now, the only work available from k·2^n±1 prime search projects for GPUs is sieving. Thus, in order to keep the GPUs busy at all, we have to keep sieving farther and farther up in terms of n, which becomes increasingly suboptimal the further we depart from our LLR leading edge.

If we had the option of putting those GPUs to work on LLR once everything needed in the foreseeable future has been well-sieved, even if it's not quite the GPUs' forte, we could at least be using them for something that's needed, rather than effectively throwing away sieving work that can be done much more efficiently down the road.

Anyway, that's my $0.02... not trying to beat this to death on this end either.
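For anyone following along, "well-sieved" here follows the usual rule of thumb (my paraphrase, not project policy): keep sieving a range only while the sieve removes candidates faster than LLR could dispose of them. A minimal sketch with placeholder timings:

```python
# Rule-of-thumb check for when deeper sieving stops paying off.
# The timings below are placeholders, not project figures.

def sieving_still_worthwhile(secs_per_factor_found, secs_per_llr_test):
    """Deeper sieving is worthwhile only while removing a candidate by
    sieving is cheaper than removing it by running its LLR test."""
    return secs_per_factor_found < secs_per_llr_test

# Example: at the current depth a factor turns up every 400 s, while an
# LLR test near the leading edge takes 1500 s -> keep sieving.
print(sieving_still_worthwhile(400, 1500))   # True
# LLR test time grows with n, so the break-even point (and hence the
# optimal sieve depth) differs from range to range.
```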
#28
Aug 2010
2×5×67 Posts
#29 |
A Sunny Moo
Aug 2007
USA (GMT-5)
1869₁₆ Posts
Indeed, that is an option.
#30
Aug 2006
5,981 Posts
#31
May 2010
499 Posts
"in a worst-case (for the GPU) scenario, you still need all cores of your i7 working together to match its output! In a best-case scenario, it's closer to twice as fast as your CPU." |
#32 |
Just call me Henry
"David"
Sep 2007
Liverpool (GMT/BST)
2³×7×107 Posts
Please remember that your i7 CPU can keep running on most of its cores alongside the GPU app, without much more power consumption.
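A quick sketch of that point with assumed numbers (the wattages and speedups below are placeholders loosely echoing figures quoted earlier in the thread, not measurements): since the CPU keeps crunching while the GPU app runs, what matters is the extra throughput per extra watt.

```python
# Sketch of the "CPU keeps running alongside the GPU" point.
# All numbers are assumed placeholders, not measurements.
cpu_only_watts     = 150.0   # system running the CPU app on all cores
cpu_plus_gpu_watts = 400.0   # assumed: same system with the GPU app added
cpu_work           = 1.0     # CPU output per hour, arbitrary units
gpu_work           = 1.5     # assumed: GPU adds ~1.5x one CPU's output

extra_watts = cpu_plus_gpu_watts - cpu_only_watts
print(f"Total output: {cpu_work + gpu_work} units/h at {cpu_plus_gpu_watts} W")
print(f"CPU-only baseline: {cpu_work / cpu_only_watts:.4f} units/h per W")
print(f"Marginal GPU rate: {gpu_work / extra_watts:.4f} units/h per extra W")
```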
#33
P90 years forever!
Aug 2002
Yeehaw, FL
7958₁₀ Posts
A different conclusion is also possible: Perhaps prime95's TF code is in need of optimization. |
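For readers who haven't looked at what TF actually does: a candidate factor q divides a Mersenne number 2^p - 1 exactly when 2^p ≡ 1 (mod q), and any prime factor has the form q = 2kp + 1 with q ≡ ±1 (mod 8). A minimal sketch of that test follows -- this is only the underlying math, not how prime95 or mfaktc implement it.

```python
# Minimal illustration of Mersenne trial factoring.  This is just the
# underlying test; prime95 and mfaktc use far more optimized arithmetic
# (and sieve the candidate list) rather than looping like this.

def is_factor(p, q):
    """q divides 2^p - 1 exactly when 2^p == 1 (mod q)."""
    return pow(2, p, q) == 1

def trial_factor(p, max_k=1_000_000):
    """Try candidates q = 2*k*p + 1; real factors also satisfy
    q == 1 or 7 (mod 8), which prunes half the candidates up front."""
    for k in range(1, max_k + 1):
        q = 2 * k * p + 1
        if q % 8 in (1, 7) and is_factor(p, q):
            return q
    return None

print(trial_factor(29))   # 233, the smallest factor of 2^29 - 1
```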