Old 2020-02-28, 20:07   #434
diep
Sep 2006
The Netherlands

Originally Posted by storm5510
PG is PrimeGrid, I take it?

My Nash tables exclude any Nash value < 1,000. The tables currently go up to k = 924,000. They are divided into blocks averaging 65K bytes per file. If there is anything you would like to have, I can send it along.

I have a GTX 1080 in my i7 system which has not been used for anything in months. It would be nice to apply it to this project area. No GPU application program exists yet that I am aware of. Something similar to LLR would be nice.
If you have Nash tables calculated, that would be nice to see, as I want to pick out a few low-weight k's (not too low weight) and sieve those deep.
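For readers unfamiliar with the term: a Nash weight can be estimated with a short script. The sketch below uses one common convention (count the n in [100001, 110000] for which k·2^n − 1 survives trial division by all primes below 256); the function names, bounds, and sieve depth are my assumptions, not something stated in this thread — adjust them if your tables use a different convention.

```python
def small_primes(limit):
    """Primes below `limit` via a simple sieve of Eratosthenes."""
    sieve = [True] * limit
    sieve[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def nash_weight(k, lo=100001, hi=110000, prime_limit=256):
    """Count n in [lo, hi] such that no prime p < prime_limit divides k*2^n - 1.

    Assumed convention for the Riesel side (k*2^n - 1); the bounds and sieve
    depth here are the commonly quoted ones.
    """
    alive = [True] * (hi - lo + 1)
    for p in small_primes(prime_limit):
        if p == 2 or k % p == 0:
            continue  # k*2^n - 1 is odd, and if p | k then p never divides it
        r = pow(2, lo, p)  # 2^lo mod p, then step n upward one at a time
        for i in range(hi - lo + 1):
            if (k * r - 1) % p == 0:
                alive[i] = False
            r = (r * 2) % p
    return sum(alive)
```

As a sanity check, sieving with no odd primes at all leaves the full 10,000 candidates, and adding p = 3 alone removes exactly every other n.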

I wrote that GPGPU code myself in CUDA in 2016. It's not ready for production yet, as finishing it lacked priority.

A GTX 1080 is roughly comparable to some hundreds of CPU cores running NewPGen there.

I've got a Titan Z here; it also has some punch in double precision (DP).

My FFT implementation only exists on the CPU right now; for the GPU it exists only on paper. Sieving on the GPU, using several kernels, would be finished first.

In all cases I go for throughput rather than latency: run several tests at the same time to use all the calculation power of the GPU, rather than trying to finish one exponent as fast as possible.

Effectively that means a single exponent (or a bunch of them) runs within a single SIMD unit and has a very limited number of warps working on it at the same time.

So it's a totally different approach from what Nvidia releases.
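Some back-of-the-envelope arithmetic for why the throughput approach can win. All numbers here are made-up placeholders for illustration, not measurements; the 20-SM count matches a GTX 1080, but the per-test time and scaling efficiency are assumptions:

```python
SMS = 20            # streaming multiprocessors on a GTX 1080
T_ONE_SM = 20.0     # hours for one test pinned to a single SM (placeholder)
SCALING_EFF = 0.5   # assumed efficiency when one test is spread over all SMs

# Latency mode: one test at a time, spread across the whole GPU,
# paying synchronisation/communication overhead between SMs.
latency_hours_per_test = T_ONE_SM / (SMS * SCALING_EFF)
latency_tests_per_day = 24.0 / latency_hours_per_test

# Throughput mode: one independent test per SM, no cross-SM synchronisation.
throughput_tests_per_day = SMS * 24.0 / T_ONE_SM

print(latency_tests_per_day, throughput_tests_per_day)  # 12.0 vs 24.0
```

With these placeholder numbers the batched mode completes twice as many tests per day, even though each individual test takes ten times longer.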

Edit: and it might not work for Mersenne numbers at all, as those transforms are so huge that with that many exponents in flight you would eat too much of the GPU's device RAM.

Last fiddled with by diep on 2020-02-28 at 20:10