Go Back > Great Internet Mersenne Prime Search > Hardware

2006-03-25, 04:55   #1
Mar 2006

11 Posts
Graphic Card usable for Prime?

Nvidia wants to use graphics cards to calculate the physics in PC games.

When will Prime95 follow this idea and use the GPU for the math? Is it possible?

Riza
2006-03-25, 05:18   #2
moo
Jul 2004

809 Posts

There are those PhysX cards being sold currently, and they must have some wicked power, because I saw them do things in a video game, during gameplay, that not even a 10000-dollar machine could do.
2006-03-25, 05:20   #3
dsouza123
Sep 2002

662₁₀ Posts

This has been brought up before;
the issue is the precision/size of the numbers (data formats) available.

The GPUs have 32-bit single-precision floating point (earlier ones, 16-bit integer),
but 64-bit double precision is needed, at least for the FFT routines
used by the Lucas-Lehmer algorithm in Prime95.

If the smaller range of numbers available with 32-bit floats can be
used by a math algorithm, then the GPU could be used.

If the algorithm is enhanced by the parallel processing of the GPU, then
it could have very good performance.

The physics calculations done on the nVidia GPUs in SLI could also
be done by the AGEIA PhysX Processor ( PPU ) that is starting to ship.
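The precision limit described above can be sketched in a few lines of Python (my own illustration, not code from the thread): a 32-bit IEEE-754 single has only a 24-bit mantissa, so it cannot even represent 2^24 + 1 exactly, while a 64-bit double (53-bit mantissa) holds it with room to spare.

```python
import struct

def to_f32(x):
    # Round-trip a value through a 32-bit IEEE-754 single.
    return struct.unpack('f', struct.pack('f', float(x)))[0]

n = 2**24 + 1           # 16777217: smallest integer a 24-bit mantissa cannot hold
print(to_f32(n) == n)   # False - the low-order bit is silently rounded away
print(float(n) == n)    # True - a 64-bit double keeps it exactly
```

An LL-test FFT must recover exact integer coefficients by rounding, so losing even one low-order bit like this corrupts the whole result.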
2006-03-25, 19:24   #4
cheesehead
"Richard B. Woods"
Aug 2002
Wisconsin USA

2²×3×641 Posts

As I wrote in an earlier thread:
The potential use of graphics cards to do L-L testing or factoring has been discussed several times in the Hardware forum. (Search on "graphics" or "video" there.)

The main reason video cards are not suitable for GIMPS work is that GIMPS calculations require high precision and so use double-precision floating-point arithmetic, but video cards operate with only single-precision floating-point numbers. Single-precision FP is all that video work requires, so it is unlikely that any future video cards will be able to perform double-precision FP, no matter how advanced their other capabilities or speed.

Q: Why not use single-precision?
A: It has to do with the technical details of FFT arithmetic, especially the need to guard against losing low-order result bits because of FP rounding/truncation. For more information, search the Math forum threads discussing this subject.
2006-03-27, 00:20   #5
wblipp
May 2003
New Haven

3·787 Posts

Has anybody looked into Trial Factoring on a GPU? It seems like the data needs of that task are a better match for the hardware.
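Trial factoring is indeed integer-only work. A rough sketch of the idea (the function name and search limit are my own, not Prime95 internals): candidate factors of 2^p − 1 must have the form q = 2kp + 1 with q ≡ 1 or 7 (mod 8), and each candidate is tested independently by a modular exponentiation, so the loop parallelizes trivially across GPU threads.

```python
# Illustrative sketch of trial factoring a Mersenne number 2^p - 1.
# trial_factor and max_k are made-up names for this example.
def trial_factor(p, max_k=100000):
    for k in range(1, max_k + 1):
        q = 2 * k * p + 1
        # Candidate factors must be 1 or 7 (mod 8); pow(2, p, q) == 1
        # means 2^p = 1 (mod q), i.e. q divides 2^p - 1.
        if q % 8 in (1, 7) and pow(2, p, q) == 1:
            return q
    return None

print(trial_factor(11))   # 23, since 2^11 - 1 = 2047 = 23 * 89
```

Note that each candidate uses only small-integer arithmetic, which is exactly the data format the 2006-era hardware discussed above does support.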
2006-04-02, 22:27   #6
nucleon
Mar 2003

5·103 Posts

Looks like the dedicated physics cards won't be suitable for LL work either.

According to:

The PPUs from Ageia are "optimized for 32-bit floating-point math"

(and yeah, the demos using the PPUs look absolutely awesome)

-- Craig
2006-05-29, 16:56   #7
JHagerson
May 2005
Naperville, IL, USA

2²·7² Posts

Yes, 32-bit math only. Here is a link to the library distribution.
2006-09-21, 05:23   #8
RMAC9.5
Jun 2003

3²×17 Posts
64 bit Floating Point Math on ATI R580 graphics cards

I recently saw a reference to which describes the performance increase that is possible using their software libraries and ATI R580 graphics cards. According to this web site they support 64 bit floating point math for high performance computing needs using C and/or C++. They are also offering a no cost evaluation program for Linux workstation users.
2006-10-02, 16:12   #9
guido72
Aug 2002
Rovereto (Italy)

3×53 Posts

Something new under the sun?
Or are we still facing double-precision issues?
2006-11-09, 10:18   #10
Cruelty
May 2005

2³·7·29 Posts

How about latest nVidia product?
  • Full 128-bit floating point precision through the entire rendering pipeline

Last fiddled with by Cruelty on 2006-11-09 at 10:20
2006-11-09, 11:38   #11
Dresdenboy
Apr 2003
Berlin, Germany

192 Posts

What Nvidia means is actually a 4-component vector composed of four 32-bit floats. See, as an example, how they handle the bit count:
  • 128-bit floating point high dynamic-range (HDR) lighting with anti-aliasing
  • 32-bit per component floating point texture filtering and blending
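To make that marketing figure concrete, here is a small sketch of my own (not from the thread): the "128 bits" are four independent 32-bit components packed side by side, as in an RGBA pixel, not one 128-bit number.

```python
import struct

# A "128-bit" RGBA pixel is four separate IEEE-754 singles packed together.
pixel = (1.0, 0.5, 0.25, 1.0)        # R, G, B, A - each a 32-bit float
packed = struct.pack('4f', *pixel)   # 4 components x 4 bytes each
print(len(packed) * 8)               # 128 bits total, but only 32 per component
```

So the per-component precision is still single precision, which is why this figure does not change the double-precision picture for LL testing.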

