World Community Grid Forums
Thread Status: Active. Total posts in this thread: 7
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Folding@home has a project exploring the use of the GPU to crunch work units. Is there a similar effort in that direction here at WCGrid?
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
There is a lot I could say about the difficulty of programming the GPU, but Folding@Home has a fairly complete thread about that. GPU programming is an application issue that would have to be tackled by the application developer. If somebody ever develops a GPU application and approaches us, we could then wrestle with the problem of bringing it on board.

What a nightmare that could turn into. But interesting.

Lawrence
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Interesting indeed.

Fast forward a bit: can we say, roughly, that given what NVIDIA/ATI claim about the power of their GPUs/VPUs, and inferring from that, the total power made available would be the sum of the power of the CPU and the GPU/VPU? Given further that the GPU/VPU has more power than the CPU, the idea of tapping power that normally goes into video gaming and turning it to distributed computing is something we simply cannot ignore. Finally, given that better than 50% of all desktops probably have a GPU, one can imagine the untapped power sitting right there.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
We really can't. While it is possible, the amount of computing power isn't as huge as advertised. Graphics is a specialised subject, with specialised hardware. Some operations are very fast, while others are slow or not available at all.
All in all, the effort outweighs the benefits for the majority of projects, unless somebody creates a "Virtual GPU PC", and even that will only reduce the effort required, and won't give the massive boost you are looking for. |
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
I ran across an interesting case recently. Ordinarily, 64-bit Windows speeds programs up by about 10-15%, because recompiling for 64 bits lets the compiler use a second bank of registers. But one application program reached all of its data through links (pointers). Since it had to load 64-bit pointers rather than 32-bit pointers, the application ran slower in 64-bit mode than it did in 32-bit mode.

I am more interested in the 2x FPU chips being designed. With two floating-point units per core, I suspect that scientific programs will really speed up. The ultimate limit is memory bandwidth per chip, but we are not there yet.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
I still doubt that. GPU applications need to be designed in such a totally different way that porting will require not just a complete rewrite but a complete redesign. Then there are more serious problems:

Most GPUs do not meet the IEEE floating-point specifications. They don't need to: "almost" is close enough for a pretty picture, and accuracy is sacrificed for speed.

Many GPUs do not support 32-bit operations throughout. You hear wild claims about 64-bit and 128-bit processors (and more), but not where it counts. As far as I know, no GPU can do floating-point operations on 64-bit numbers, and some science applications need that precision.

Taking all this into account (plus the wildly varying architectures of different chips), there is little to be gained even thinking about it at this point. This may change as graphics chips become more general purpose, but it will be several generations before they are a practical proposition (and they may never be: specialisation can be good, and there is really no pressure on the graphics companies to produce a general-purpose chip; quite the opposite, really).
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
lawrencehardin...

So it seems to me that the prospects for using the GPU to crunch work units appear rather nebulous at this point. But join me, if you will, in wishing Folding@Home success in its effort to tap the power of the GPU. We may learn a thing or two just by watching them. I know they will succeed, and I invite WCGrid to come join the winners. Let us all put our heads together and find a way to use the power of the GPU in distributed computing.