mersenneforum.org > Great Internet Mersenne Prime Search > Hardware
Old 2008-06-19, 11:57   #12
only_human
"Gang aft agley"
Sep 2002
2·1,877 Posts

I refer to GPGPU.ORG to get a feel for GPU applications.
General-Purpose Computation Using Graphics Hardware: http://www.gpgpu.org/
Quote:
GPGPU stands for General-Purpose computation on GPUs. With the increasing programmability of commodity graphics processing units (GPUs), these chips are capable of performing more than the specific graphics computations for which they were designed. They are now capable coprocessors, and their high speed makes them useful for a variety of applications. The goal of this page is to catalog the current and historical use of GPUs for general-purpose computation.
Of the 326 entries categorized, 78 are in scientific computing. A few of the papers relate to extending precision to wider floating-point numbers, but most of those are from 2005 and 2006. The site gives a general feel for the ways GPUs are being used and is worth a visit.
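
For a feel of what those precision-extension papers do: the usual trick is "double-single" arithmetic, where a value is carried as an unevaluated sum of two floats, roughly doubling the effective precision. A minimal sketch of the core idea (my own illustration, not from any particular paper; the error-free transform only holds if the compiler doesn't fuse or reorder the operations, which is exactly the care those papers discuss for early GPUs):

Code:
#include <cstdio>

// A "double-single" value: the unevaluated sum hi + lo of two floats.
struct dsfloat { float hi, lo; };

// Knuth's two-sum: s is the rounded sum, e the exact rounding error.
__host__ __device__ dsfloat two_sum(float a, float b) {
    float s = a + b;
    float v = s - a;
    float e = (a - (s - v)) + (b - v);
    dsfloat r = { s, e };
    return r;
}

// Add two double-single values, then renormalize so |lo| stays tiny vs. hi.
__host__ __device__ dsfloat ds_add(dsfloat a, dsfloat b) {
    dsfloat s = two_sum(a.hi, b.hi);
    s.lo += a.lo + b.lo;
    float hi = s.hi + s.lo;
    float lo = s.lo - (hi - s.hi);
    dsfloat r = { hi, lo };
    return r;
}

int main() {
    // 1 + 1e-9 is not representable in a single float; the pair keeps it.
    dsfloat a = { 1.0f, 1e-9f };
    dsfloat b = { 1.0f, 0.0f };
    dsfloat c = ds_add(a, b);
    printf("hi = %.9g, lo = %.9g\n", c.hi, c.lo);
    return 0;
}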
Old 2008-06-22, 03:36   #13
cheesehead
"Richard B. Woods"
Aug 2002
Wisconsin USA
1111000001100₂ Posts

Wired chimes in:

"Supercomputing Power Hits the Desktop, Minus the Software"

http://www.wired.com/techbiz/it/news/2008/06/gpu_power

Quote:
Originally Posted by Bryan Gardiner
The PC industry's two largest graphics companies released new top-of-the-line models this week.

...

AMD, which acquired graphics maker ATI in 2006, released two new chips, the Radeon HD 4850 and the Radeon HD 4870. Nvidia, the other dominant player in the space, unveiled its new GeForce GTX 260 and GeForce GTX 280 processors.

...

Indeed, cheap access to such formidable computing power could mean that, over the next few years, we will see an explosion of new independent research along with profound new discoveries, analysts say. Additionally, new consumer applications will be able to draw on the graphics processing unit (GPU) for even more eye-watering special effects and even occasionally useful visual information.

"We'll start to get things like real-time mapping from Google that incorporates all manner of real world information," says Bob O'Donnell, an analyst at IDC. "All of this is going to bubble up more and more."

...

Just last week, Khronos, the industry consortium behind the OpenGL standard, announced what it calls Open Computing Language, or OpenCL. With this new heterogeneous computing initiative, the group hopes to come up with a standardized (and universal) way of programming parallel computing tasks.

In many ways, it's the Holy Grail developers have been waiting for: a hardware-agnostic standard that unleashes the power of multi-core CPUs and GPUs using a familiar language.

Apple is throwing its weight behind parallel processing too, and last week committed to using the OpenCL specification as part of its next operating system release, Snow Leopard.

Other companies, including AMD, Nvidia, ARM, Freescale, IBM, Imagination, Nokia, Motorola, Qualcomm, Samsung and Texas Instruments have joined the OpenCL working group.

If initiatives like OpenCL gain momentum, the days of researchers applying for grants and traveling across the country to use a given university or research facility's supercomputer may well be at an end. Similarly, distributed computing projects like Folding@Home and Seti@Home may see a huge boost in performance by using hundreds of thousands of computers equipped with these new powerful processors.

Last fiddled with by cheesehead on 2008-06-22 at 03:45
Old 2008-06-23, 15:43   #14
xilman
Bamboozled!
"𒉺𒌌𒇷𒆷𒀭"
May 2003
Down not across
10,949 Posts
GPU for other than LL testing

(Mod: If this post is off-topic, please tell me and delete it.)

I don't wish to re-open the double-precision-on-GPU discussion. However, I would like to hear from anyone who has managed to get any kind of GPGPU code running on their system, as I'd like to experiment with various CNT (computational number theory) applications.

I scrounged an nVidia Quadro FX1700 at the weekend and plugged it into my system --- an emachines 6260 (don't you just love these bizarre cApitalization and font sChemes used nOwadays?). The system is a dirt-cheap AMD64-3500+ machine with 2.5G memory, built-in ATI Radeon Express 200 graphics and a PCI-E slot. It takes the card no problem, and with a DVI cable connecting it to a monitor the machine also boots without problem --- except that the nVidia driver isn't built into the Gentoo kernel, so everything is strictly text mode. That's just fine, because I don't want to run a display on the card anyway.
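
For what it's worth, once the CUDA toolkit is installed, a trivial probe along these lines (built with nvcc) should confirm whether the runtime can see the card even with no display running on it; a sketch only:

Code:
#include <cstdio>
#include <cuda_runtime.h>

// List every CUDA device the runtime can see; works headless.
int main() {
    int n = 0;
    cudaError_t err = cudaGetDeviceCount(&n);
    if (err != cudaSuccess) {
        printf("CUDA error: %s\n", cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < n; ++i) {
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, i);
        printf("device %d: %s, %d multiprocessor(s), %.0f MB global memory\n",
               i, p.name, p.multiProcessorCount,
               p.totalGlobalMem / (1024.0 * 1024.0));
    }
    return 0;
}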

However, I've not yet found a way to disable the card in the BIOS so that I can continue to use the on-board ATI with its VGA connector. Emachines' and Gateway's web sites (Gateway took over emachines a while back) and tech-support lines have been utterly useless so far.

Anyone have any advice and/or suggestions?

Paul
Old 2008-06-23, 22:56   #15
dsouza123
Sep 2002
2·331 Posts

Flash the BIOS.

Using a utility like CPU-Z, find out exactly which motherboard the PC has, and check whether the motherboard maker offers a BIOS update that gives you more BIOS options.

There may also be jumpers that can enable/disable the onboard video.

Last fiddled with by dsouza123 on 2008-06-23 at 23:00
Old 2008-06-23, 23:18   #16
IronBits
I ♥ BOINC!
Oct 2002
Glendale, AZ. (USA)
3·7·53 Posts

Sometimes the BIOS has a "boot this first" option: Onboard/PCI or AGP/PCI, etc.
Old 2008-06-24, 00:24   #17
dsouza123
Sep 2002
2×331 Posts

Intel is going to present a paper on Larrabee at SIGGRAPH 2008

http://www.siggraph.org/s2008/attend...e=papers&id=34

Larrabee: A Many-Core x86 Architecture for Visual Computing
This paper introduces Larrabee, a many-core hardware architecture, a new software rendering pipeline, a many-core programming model, and performance analysis for several applications. Larrabee uses multiple in-order x86 CPU cores that are augmented by a wide vector processor unit, as well as fixed-function co-processors. This provides dramatically higher performance per watt and per unit of area than out-of-order CPUs on highly parallel workloads and greatly increases the flexibility and programmability of the architecture as compared to standard GPUs.
Old 2008-06-26, 11:06   #18
only_human
"Gang aft agley"
Sep 2002
2×1,877 Posts

Tom's Hardware has a nice article, "Nvidia's CUDA: The End of the CPU?". One user comment points out that development is also possible through emulation:
Quote:
You don't get the satisfaction of cool speedups, but it's just as educational, and easier to debug. No screen flickers
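
The emulator (nvcc -deviceemu with the current toolkits) runs each kernel thread as an ordinary host thread, so you can even drop a plain printf into a kernel to watch it work. A quick sketch of the idea (this only compiles in emulation mode; native in-kernel printf on real hardware came much later):

Code:
#include <cstdio>
#include <cuda_runtime.h>

// Each emulated thread squares its index; printf works because the
// "device" code is really running on the CPU under -deviceemu.
__global__ void probe(int *out) {
    int i = threadIdx.x;
    out[i] = i * i;
    printf("thread %d wrote %d\n", i, out[i]);
}

int main() {
    int *d;
    cudaMalloc((void **)&d, 8 * sizeof(int));
    probe<<<1, 8>>>(d);
    cudaThreadSynchronize();   // the CUDA 2.x-era synchronization call
    cudaFree(d);
    return 0;
}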
Old 2008-06-26, 17:37   #19
cheesehead
"Richard B. Woods"
Aug 2002
Wisconsin USA
2²·3·641 Posts

Quote:
Originally Posted by only_human
development is also possible through emulation:
Emulation is the sincerest once-removed form of flattery.
Old 2008-08-31, 13:03   #20
dsouza123
Sep 2002
2×331 Posts

Dr. Dobb's Journal has an article by a former scientist at Los Alamos National Laboratory who worked with massively parallel systems and is now using CUDA.

Quote:
One of my production codes, now written in CUDA and running on NVIDIA GPUs, shows both linear scaling and a nearly two orders of magnitude speed increase over a 2.6-GHz quad-core Opteron system.
Quote:
Getting started costs nothing and is as easy as downloading CUDA from the CUDA Zone homepage (look for "Get CUDA"). After that, follow the installation instructions for your particular operating system. You don't even need a graphics processor because you can start working right away by using the software emulator to run on your current laptop or workstation. Of course, much better performance will be achieved by running with a CUDA-enabled GPU. Perhaps your computer already has one. Check out the "CUDA-enabled GPUs" link on the CUDA Zone homepage to see. (A CUDA-enabled GPU includes shared on-chip memory and thread management.)
http://www.ddj.com/hpc-high-performa...ting/207200659
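
To make "getting started" concrete, a first program in the spirit of the article might look like the sketch below (my own minimal example, not taken from the article). It runs on a CUDA-enabled GPU, or on a plain CPU if built with nvcc -deviceemu:

Code:
#include <cstdio>
#include <cuda_runtime.h>

// Add 1.0 to every element of an array, one thread per element.
__global__ void add_one(float *a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) a[i] += 1.0f;
}

int main() {
    const int n = 1024;
    float host[n];
    for (int i = 0; i < n; ++i) host[i] = (float)i;

    float *dev;
    cudaMalloc((void **)&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    add_one<<<(n + 255) / 256, 256>>>(dev, n);

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);

    printf("host[0] = %g, host[%d] = %g\n", host[0], n - 1, host[n - 1]);
    return 0;
}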
Old 2008-09-01, 00:42   #21
cheesehead
"Richard B. Woods"
Aug 2002
Wisconsin USA
1111000001100₂ Posts

Quote:
Originally Posted by dsouza123
CUDA Zone homepage (look for "Get CUDA").
Ahem. Where-to-look may have changed.

I recommend: Click on "DOWNLOAD CUDA" at the top of the CUDA homepage. You'll reach http://www.nvidia.com/object/cuda_get.html. Follow the "DOWNLOAD AND INSTALLATION TIPS" there.

(If, instead, you click on "Get CUDA" under DOWNLOADS on the left side of the cuda-get.html page, you'll just do a spin-in-place to the same cuda-get.html page, and eventually decide to follow the "DOWNLOAD AND INSTALLATION TIPS" anyway.)

Oh ... if you are downloading CUDA 2.0 for Windows XP, better have 150MB free disk space for the three parts (70 MB, 20 MB, 60 MB). And if you're using a dial-up link, divide 150MB by (24 * 3600 * (your actual average baud rate -- be honest)/10) to see how many days it'll take if nothing goes wrong.

Last fiddled with by cheesehead on 2008-09-01 at 00:55
Old 2008-09-01, 10:54   #22
Mini-Geek
Account Deleted
"Tim Sorbera"
Aug 2006
San Antonio, TX USA
2²·11·97 Posts

Quote:
Originally Posted by cheesehead
And if you're using a dial-up link, divide 150MB by (24 * 3600 * (your actual average baud rate -- be honest)/10) to see how many days it'll take if nothing goes wrong.
I think it'd be closer to:
150*1024/(your actual download speed in KB/sec)/3600
or the equivalent:
150*1024*1024/(your actual download speed in B/sec)/3600

For 5 KB/sec that's ~8.5 hours.
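
Both versions agree once the units line up (a baud rate divided by 10 is roughly bytes per second, since each byte costs about ten line bits). A quick sketch for plugging in your own numbers; nothing GPU-specific here, just the arithmetic from the two posts:

Code:
#include <cstdio>

int main() {
    double mb = 150.0;                 // download size in MB

    // Mini-Geek's form: hours at a given download speed in KB/sec.
    double kb_per_sec = 5.0;
    double hours = mb * 1024.0 / kb_per_sec / 3600.0;

    // cheesehead's form: days at a given baud rate (~10 line bits/byte).
    double baud = 56000.0;
    double days = mb * 1024.0 * 1024.0 / (24.0 * 3600.0 * baud / 10.0);

    printf("%.0f MB at %.0f KB/s : %.1f hours\n", mb, kb_per_sec, hours);
    printf("%.0f MB at %.0f baud : %.2f days (%.1f hours)\n",
           mb, baud, days, days * 24.0);
    return 0;
}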