World Community Grid Forums
Thread Status: Active | Total posts in this thread: 22
fablefox
Senior Cruncher | Joined: May 31, 2010 | Post Count: 161 | Status: Offline
I'll upgrade my GPU the moment it's released. I mean, ohhh, the days and badges :-)
----------------------------------------
BladeD
Ace Cruncher | USA | Joined: Nov 17, 2004 | Post Count: 28976 | Status: Offline
[quote]
[quote]
[quote]Yes, a GPU can be considerably faster, and 250 times is not unheard of. Now for a reality check. This cannot be done in the real world.[/quote]
That is not correct, it HAS been done, read it again: [url]http://www.biomedcentral.com/1756-0500/4/97[/url]
[/quote]
Or is it a false statement? I'm going to assume that you simply neglected to quote me in context due to over-zealousness. I would hesitate to accuse someone I don't know of intentional deception simply to win an internet argument. For completeness, allow me to reproduce that sentence in its entirety to show that my point stands as originally stated:

"This cannot be done in the real world. It can be accomplished under ideal situations with a single model, or a few models, of GPU."

I never questioned that a GPU can perform at these speeds; I suggested that in the general population of hardware available in a distributed computing setting, this type of performance is impossible. Only a few cards, or even a single card, will fit the optimized code. The rest will be considerably slower, if they are compatible at all. My argument was, and remains, that programming a general-release GPU application is far more difficult than its proponents suggest.
[/quote]

Looks like more than a few to me...

Milkyway@Home requires a GPU supporting double-precision arithmetic.

NVIDIA:
- Requires compute capability 1.3 and above.
- For the GeForce 2xx series, this means the GTX 260 and above.
- Any Fermi-based GPU (GeForce GTX 4xx or 5xx) should support doubles.
- Older GPUs (such as a GeForce 8xxx or 9xxx) will not work.

AMD/ATI:
- The oldest GPUs that work are the ATI Radeon HD 38x0 series.
- In general, laptop AMD GPUs do NOT support doubles despite similar branding (e.g. a Mobility Radeon 5870 is not the same as a regular Radeon 5870). The Mobility Radeon 48xx are the only current ATI laptop GPUs that have doubles.
- In the Radeon 6000 series, only the 69xx have doubles.

Examples (these lists do not cover all of the GPUs that should work):

NVIDIA:
GeForce GTX 590
GeForce GTX 580
GeForce GTX 570
GeForce GTX 560 Ti
GeForce GTX 560
GeForce GTX 550 Ti
GeForce GT 545
GeForce GTX 480
GeForce GTX 470
GeForce GTX 465
GeForce GTX 460
GeForce GTS 450
GeForce GT 430
GeForce GTX 295
GeForce GTX 285
GeForce GTX 280
GeForce GTX 275 (credits to Bruce)
GeForce GTX 260
Tesla S1070
Tesla C1060
Tesla M2090
Tesla M2070
Tesla M2050
Tesla S2050
Quadro Plex 2200 D2
Quadro FX 5800
Quadro FX 4800
Quadro 5000s, 4000s (based on the GT200 GPU)

AMD/ATI:
AMD Radeon 6990
AMD Radeon 6970
AMD Radeon 6950
ATI Radeon HD 5970 (credits to kashi)
ATI Radeon HD 5870
ATI Radeon HD 5850
ATI Radeon HD 5830
ATI Radeon HD 4890
ATI Radeon HD 4870
ATI Radeon HD 4850
ATI Radeon HD 4830
ATI Radeon HD 4770
ATI Radeon HD 38x0 (credits to cenit for the AMD documentation describing the products above)
ATI FireStream 9270
ATI FireStream 9250
ATI FireStream 9170 (credits to Cluster Physik)

----------------------------------------
[Edit 1 times, last edit by BladeD at Jul 17, 2011 5:43:08 PM]
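For anyone curious how a program can test the NVIDIA requirement above at run time, here is a minimal sketch using the CUDA runtime API. The compute capability 1.3 threshold is taken from the list above; everything else (the file name, the output format) is illustrative and is not Milkyway@Home's actual detection code.

[code]
// Minimal sketch: list CUDA devices and flag which ones meet the
// compute capability 1.3+ requirement for hardware double precision.
// Build with: nvcc check_doubles.cu -o check_doubles
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA-capable GPU found.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // Compute capability 1.3 (GTX 260 and above) is the first
        // NVIDIA generation with hardware double-precision units.
        bool hasDoubles = prop.major > 1 ||
                          (prop.major == 1 && prop.minor >= 3);
        printf("Device %d: %s (compute %d.%d) -- doubles %s\n",
               i, prop.name, prop.major, prop.minor,
               hasDoubles ? "supported" : "NOT supported");
    }
    return 0;
}
[/code]

This kind of per-device branching is also what the portability argument quoted above is about: a project shipping one binary must either target the lowest common capability or maintain separate builds for each hardware generation.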