Old 2018-06-03, 15:17   #3
Drivers and GPUs trivia / traps / tricks

These are mostly from Windows experience.
  1. AMD and NVIDIA GPUs installed in the same system can be very problematic. There is a way to get them to coexist, but segregating them to separate systems is simpler and more robust.
  2. A failed graphics driver install can leave a lingering mess. Removing the driver with the vendor-supplied tools and Add/Remove Programs may not be enough; thorough file deletion and registry editing afterward, or use of DDU (Display Driver Uninstaller), may be required. Alternatively, use the installer's "Clean Install" option.
  3. NVIDIA allows only one NVIDIA graphics driver installed per system, and that driver must support all the installed NVIDIA cards. Older GPUs are dropped from support as newer GPUs arrive and require newer drivers, so really old GPUs may need to be segregated to a system that is not automatically getting driver updates. There is a relationship between driver version, the minimum and maximum CUDA level supported, the minimum and maximum compute capability supported, and therefore the GPU models supported. See for more on this. (Eventually old GPUs become uneconomic to operate as newer, more energy-efficient GPUs become available; or they fail before then, or are replaced with faster hardware.)
  4. Installing the AMD or CUDA SDK on a system can disable the OpenCL driver that until then had allowed the Intel IGP to run mfakto.
  5. Some systems by design disable the IGP when a discrete GPU is installed, so the IGP cannot be used for computation or display in that case. (The Dell Optiplex 755 with a Core 2 Duo was an example.)
  6. The Linux nouveau driver installs by default for NVIDIA GPUs and prevents installation of the proprietary NVIDIA driver needed for CUDA computing. Nouveau puts up a pretty good fight, at least on the Debian version I tried; supposedly it can be defeated by blacklisting it.
  7. Mersenne code that uses multiple GPUs working together to process a single worktodo entry does not exist in the GIMPS community, to my knowledge. (Prime95 has this capability on CPU cores.) Physically linking GPUs with NVIDIA SLI or AMD CrossFire means multiple GPUs work together sharing the memory installed on one while the other's memory sits idle. As fast as those interconnects are, they are slow compared to on-board memory bandwidth. For P-1 especially, and also for primality testing of high exponents, lots of memory is a plus, so that loss of available GPU memory would be a drawback. Throughput is better with individual GPUs each working on separate assignments with their own full complement of memory, via separate program instances. (Clarified with SELROC's input.)
  8. PCIe extenders can be used; test well for reliability.
    Powered extenders are recommended; non-powered extenders are not.
    Extenders deliver only about 60 watts through the slot; beyond that, the GPU's additional power plugs are required.
    Extenders are very common in mining the various types of digital coin.
    Bus load for most GPU Mersenne code is quite light, so a 1x PCIe interface is not much of a limit on throughput.
  9. Some systems won't make use of a GPU connected to a PCI slot via a PCIe-to-PCI adapter if a PCIe-connected GPU is already present. The adapter and GPU won't even be detected as present by Windows, appearing nonfunctional.
  10. The same adapter and external GPU that is ignored in the preceding case can be used on a system that has PCI but no PCIe slots and no other discrete GPUs (where it also takes over display duties from the IGP).
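A rough way to visualize the driver/CUDA/compute-capability relationship in item 3 is a lookup table. The cutoffs below are my reading of NVIDIA's toolkit release notes and could be off for point releases; the table and function names are illustrative, not any real API. A minimal Python sketch:

```python
# Assumed minimum compute capability accepted by each CUDA toolkit
# major version. Verify against NVIDIA's release notes before relying
# on these cutoffs.
CUDA_MIN_CC = {
    8: 2.0,   # CUDA 8.x was the last to support Fermi (cc 2.x)
    9: 3.0,   # CUDA 9.x dropped Fermi
    10: 3.0,
    11: 3.5,  # CUDA 11.x dropped cc 3.0/3.2
    12: 5.0,  # CUDA 12.x dropped Kepler entirely
}

def toolkit_supports(cuda_major: int, compute_capability: float) -> bool:
    """True if the given CUDA major version still accepts a GPU of the
    given compute capability, per the assumed table above."""
    return compute_capability >= CUDA_MIN_CC[cuda_major]

# Example: a GTX 580 (cc 2.0) needs CUDA 8 or older, so it can't share
# a driver/toolkit with a card that requires CUDA 11+.
print(toolkit_supports(8, 2.0))    # True
print(toolkit_supports(12, 3.5))   # False
```

This is why mixed-generation NVIDIA systems eventually force a choice: the single installed driver pins a toolkit range, and the oldest card falls outside it.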
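For item 6, the usual way to defeat nouveau on Debian-family systems is a modprobe blacklist file plus an initramfs rebuild. A sketch, assuming standard file paths (check your distribution's documentation for the exact procedure):

```shell
# Contents of /etc/modprobe.d/blacklist-nouveau.conf:
blacklist nouveau
options nouveau modeset=0

# Then rebuild the initramfs and reboot so nouveau never loads:
#   sudo update-initramfs -u      (Debian/Ubuntu)
#   sudo reboot
```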
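Item 8's power and bandwidth limits are easy to sanity-check with arithmetic. The 60 W slot budget and ~0.985 GB/s usable PCIe 3.0 x1 bandwidth below are assumed round numbers, not measured values:

```python
# Back-of-envelope figures (assumptions, not spec quotes).
SLOT_BUDGET_W = 60.0        # assumed safe power delivery through a riser's slot
PCIE_X1_GEN3_GBPS = 0.985   # approx usable bandwidth of one PCIe 3.0 lane, GB/s

def aux_power_needed(board_power_w: float) -> float:
    """Watts the card must draw from its 6/8-pin plugs rather than the slot."""
    return max(0.0, board_power_w - SLOT_BUDGET_W)

def x1_transfer_seconds(gigabytes: float) -> float:
    """Seconds to move a given payload over a single gen3 lane."""
    return gigabytes / PCIE_X1_GEN3_GBPS

print(aux_power_needed(250.0))            # a 250 W GPU: 190.0 W via plugs
print(round(x1_transfer_seconds(0.5), 2)) # a 0.5 GB save file: ~0.51 s
```

So even at x1, occasional checkpoint traffic is sub-second, which is why the narrow link barely dents throughput for this workload, while the slot power budget is the constraint that actually bites.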

Top of reference tree:

Last fiddled with by kriesel on 2020-07-16 at 18:44