#1
This thread is intended to hold only reference material specific to GPUs, particularly discrete GPUs, which are typically PCIe cards.
(Suggestions are welcome. Discussion posts in this thread are not encouraged. Please use the reference material discussion thread http://www.mersenneforum.org/showthread.php?t=23383. Off-topic posts may be moved or removed, to keep the reference threads clean, tidy, and useful.)

Table of contents
#2
These were gathered mostly from NVIDIA spec sheets.
Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1
#3
These are from James Heinrich's benchmark pages and from GPU-Z output (some contributed by henryzz or kladner).
TechPowerUp has a terrific GPU database at https://www.techpowerup.com/gpu-specs/

Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1
#4
The attached small table is organized by GPU model. There's a big table organized by compute capability level at https://en.wikipedia.org/wiki/CUDA#GPUs_supported which shows which CUDA level provides which compute capability level, and which compute capability level is required for given NVIDIA GPU models.
The practical effect is that there is a minimum driver version at which a given new compute capability level is supported, and eventually a maximum driver version beyond which support for an old compute capability level is dropped. Attempts to install, compile, run, debug, etc., outside those driver-version limits will fail. This has restrictive consequences for systems that contain both old and new GPUs. Some GPUs (Pascal arch.; GTX10xx) require CUDA 8 capable drivers, others (Volta arch.) CUDA 9.x, others (Turing arch.; RTX20xx etc.) CUDA 10. CUDA 6.5 was the last version to support compute capability 1.x GPUs (Tesla arch.). CUDA 8 is the last version to support compute capability 2.x GPUs (Fermi arch.; GTX4xx, Quadro 2000, Quadro 4000, Quadro 5000, Tesla C2075, etc.).

NVIDIA allows only one NVIDIA driver installed per system. Therefore, GTX4xx (CUDA <9) and RTX20xx (CUDA 10 minimum) cannot be run in the same system at the same time.

Operating system is also a consideration. Old operating systems are not supported by new driver releases, which come out to support new GPU models and fix problems. Older GPU models also get dropped from newer driver releases.

The second attached table shows the relationship between driver release numbers, maximum supported CUDA levels, and Windows OS versions. It also includes approximate release dates versus driver version. Note that there are many more releases than listed. Note also that the highest compatible CUDA level dll, driver, executable, etc., is not necessarily the highest performing for a given GPU. Also, it's possible that some indicated as Win10 max might run with Win11.

Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1
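For checking what the installed driver and a given GPU actually support before attempting a run, here is a minimal sketch (illustrative only, not one of the attached tables) using the standard CUDA runtime API to report the driver's maximum supported CUDA level and each device's compute capability:

[CODE]
// Minimal sketch: query the installed driver's supported CUDA level and
// each GPU's compute capability. Build with e.g.: nvcc -o cudacheck cudacheck.cu
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    int driverVer = 0, runtimeVer = 0, count = 0;

    // CUDA level the installed NVIDIA driver supports (e.g. 10010 = CUDA 10.1)
    cudaDriverGetVersion(&driverVer);
    // CUDA runtime level this executable was built against
    cudaRuntimeGetVersion(&runtimeVer);
    printf("Driver supports CUDA %d.%d, runtime is CUDA %d.%d\n",
           driverVer / 1000, (driverVer % 100) / 10,
           runtimeVer / 1000, (runtimeVer % 100) / 10);

    if (cudaGetDeviceCount(&count) != cudaSuccess) {
        printf("No usable CUDA devices (driver/runtime mismatch?)\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // prop.major/minor is the compute capability, e.g. 7.5 for Turing (RTX20xx)
        printf("Device %d: %s, compute capability %d.%d\n",
               i, prop.name, prop.major, prop.minor);
    }
    return 0;
}
[/CODE]

A mismatch here, for example an executable built for CUDA 10 running on a driver that only supports CUDA 8, shows up as the failures described above.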
#5
How many GPUs can be run on one system?
I've run up to 4 NVIDIA or 4 AMD GPUs on Windows 7 or 10 Pro, on workstation systems intended to hold 1 or 2, and up to 5 AMD or 6 mixed GPUs on 6-slot motherboards. Available power in the chassis power supply and physical space are usually the limits in these. (Over time, in some cases the workstations degraded to being stable on only 3 GPUs, and then 2; I suspect power supply aging.) But in carefully selected gear one could go much higher; cryptocoin miners routinely do so. See for example https://www.cryptocurrencyfreak.com/...thereum-zcash/ If I recall correctly, SELROC has some experience with high-gpu-count systems. Any of the following may be factors.
See also http://snucl.snu.ac.kr/ regarding OpenCL across a cluster of systems, etc.

Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1
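For a quick check of how many GPUs the installed drivers actually expose to applications, a minimal OpenCL enumeration sketch (illustrative only; it assumes the OpenCL headers and ICD loader are installed) works across NVIDIA and AMD alike:

[CODE]
/* Minimal sketch: enumerate OpenCL platforms and count the GPU devices each
 * exposes. Build with e.g.: gcc count_gpus.c -o count_gpus -lOpenCL
 */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_uint nplat = 0;
    cl_platform_id plats[16];

    if (clGetPlatformIDs(16, plats, &nplat) != CL_SUCCESS || nplat == 0) {
        printf("No OpenCL platforms found\n");
        return 1;
    }
    for (cl_uint p = 0; p < nplat; ++p) {
        char name[256] = "";
        cl_uint ndev = 0;
        clGetPlatformInfo(plats[p], CL_PLATFORM_NAME, sizeof(name), name, NULL);
        /* Count only GPU-type devices on this platform (CPUs are excluded). */
        if (clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_GPU, 0, NULL, &ndev) != CL_SUCCESS)
            ndev = 0;
        printf("Platform %u (%s): %u GPU device(s)\n", p, name, ndev);
    }
    return 0;
}
[/CODE]

On a mixed NVIDIA/AMD box, each vendor's driver normally appears as its own platform, so the per-platform counts sum to the total number of GPUs the system presents.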
Thread | Thread Starter | Forum | Replies | Last Post |
gpuOwL-specific reference material | kriesel | kriesel | 32 | 2022-08-07 17:06 |
clLucas-specific reference material | kriesel | kriesel | 5 | 2021-11-15 15:43 |
Mfakto-specific reference material | kriesel | kriesel | 5 | 2020-07-02 01:30 |
CUDALucas-specific reference material | kriesel | kriesel | 9 | 2020-05-28 23:32 |
CUDAPm1-specific reference material | kriesel | kriesel | 12 | 2019-08-12 15:51 |