World Community Grid Forums
Thread Status: Active | Total posts in this thread: 7
Former Member | Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Hi,
Wouldn't it be better to use the shaders on my NVIDIA GeForce GPU for floating-point calculations? I haven't seen the source code of any of the BOINC clients/agents, but I think it could be interesting to use the GPU instead of the CPU for some operations.
Former Member | Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
We've covered this before.
The bottom line is, it can't be done. If you want a detailed discussion, try searching the forums.
Former Member | Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
There has been some work done on using GPUs for useful scientific calculations; I can't remember the name of the project right now, though. It is true they are somewhat limited in what they can do, but as cards become more and more programmable it may become more feasible. I'm not sure what the DX10 feature set will bring for this type of work; maybe that will improve things.
One thing that occurred to me the other day: what about using a physics card for some scientific work? It might be useful for something like atomic-level modelling for HPF, seeing as physics cards are designed to calculate the interactions between large numbers of interacting objects. I suspect the physics in a current PPU is probably too simplistic for anything useful at this stage, but I don't know if anyone has sat down and looked at it yet. Sounds like another area that may show promise in time, though.
Former Member | Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
There is a thread about using the GPU over on Folding@Home ( http://folding.stanford.edu/ ). It begins in 2002; in 2004 they began coding. I have not looked at the thread since fall 2005. They signed a nondisclosure agreement with the GPU companies in order to receive aid from them, so they were not saying much.
Summing it up: A) It isn't easy or fast to program for a GPU. B) The project scientists would have to do the programming; it is not something that could be handed off to the WCG.
Lawrence
Former Member | Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
I just kind of jumped in after briefly reading the posts. Seems like déjà vu.
I thought this was discussed somewhere on here at some point in the past... maybe it was on another distributed computing website. Anyway, here is the link I remembered from that discussion: http://www.gpgpu.org/ I believe there is a PowerPoint slide explaining the theory and usage. "GPGPU stands for General-Purpose computation on GPUs. With the increasing programmability of commodity graphics processing units (GPUs), these chips are capable of performing more than the specific graphics computations for which they were designed. They are now capable coprocessors, and their high speed makes them useful for a variety of applications. The goal of this page is to catalog the current and historical use of GPUs for general-purpose computation."
Former Member | Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Here is the link for the PowerPoint presentation on "Linear Algebra on GPUs":
http://www.gpgpu.org/s2004/slides/krueger.LinearAlgebra.slides.ppt Hope this helps clear things up.
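As a rough illustration of the kind of data-parallel linear algebra those slides cover (this is my own sketch, not code from the presentation): a GPU shader evaluates the same arithmetic independently for every element, so an operation like SAXPY (y = a*x + y) maps naturally onto it. The NumPy version below runs on the CPU, but it has the same element-wise form a GPU would parallelize; the function name `saxpy` is just an illustrative choice.

```python
import numpy as np

def saxpy(a, x, y):
    # SAXPY: y = a*x + y, the classic data-parallel linear algebra kernel.
    # On a GPU each element y[i] would be computed by an independent
    # shader/thread; here NumPy simply vectorizes the same operation.
    return a * x + y

x = np.array([1.0, 2.0, 3.0])
y = np.array([10.0, 20.0, 30.0])
print(saxpy(2.0, x, y))  # each element is independent: [12. 24. 36.]
```

The point is only that computations with no dependency between elements are the easy case to move onto graphics hardware; anything with branching or irregular data access was much harder on the shaders of that era.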
SlumLord | Cruncher | Joined: Jan 23, 2006 | Post Count: 2 | Status: Offline
Here is the forum on Folding@Home about using GPUs to fold.
Also, the Folding@Home FAQ has a recent update about their work on GPUs.