World Community Grid Forums
Thread Status: Active | Total posts in this thread: 5
AMD @ amdforum.se
Cruncher | Joined: Oct 29, 2008 | Post Count: 2 | Status: Offline
When will we be able to fold via GPU too? CPUs are getting better and better with more cores, but GPU folding is still much more effective.
Randzo
Senior Cruncher | Slovakia | Joined: Jan 10, 2008 | Post Count: 339 | Status: Offline
|
This topic has been discussed hundreds of times. Please see the previous forum threads on it.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
> When will we be able to fold via GPU too? CPUs are getting better and better with more cores, but GPU folding is still much more effective.

More effective? Hmm. Read this. Faster does not mean more effective.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
> When will we be able to fold via GPU too? CPUs are getting better and better with more cores, but GPU folding is still much more effective.

More effective? Hmm. Read this. Faster does not mean more effective.

Also this (ht to Sek): http://www.vpac.org/files/OptimizingAutodockwithCUDA.pdf

> The overhead of all CUDA IEC and TI memory transfers when running with 2037 atoms or less is greater than the processing time for executing both the IEC and TI kernels. This is due to the number of instructions executed by each thread, which is too few for the cost of the memory transfer overhead. Therefore, even with the maximum number of threads executing in AutoDock, the current CUDA IEC and TI functions do not accelerate the LGA and therefore AutoDock.
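As an aside, the paper's point (small jobs cost more in host-to-device transfer than they gain in kernel speed) can be sketched with a toy break-even model. Every constant below is an invented round number for illustration, not a measurement from AutoDock, the paper, or any real hardware:

```python
# Toy model of GPU offload break-even: when does kernel compute time
# exceed host<->device transfer overhead? All constants are assumed
# round numbers for illustration, not measured values.

PCIE_BANDWIDTH = 8e9    # bytes/s across the bus (assumed)
PCIE_LATENCY = 10e-6    # fixed seconds per transfer call (assumed)
GPU_THROUGHPUT = 500e9  # sustained floating-point ops/s (assumed)

def transfer_time(n_atoms, bytes_per_atom=64):
    """Copy atom data to the device and results back: two fixed
    latency hits plus the bytes moved over the bus."""
    bytes_moved = 2 * n_atoms * bytes_per_atom
    return 2 * PCIE_LATENCY + bytes_moved / PCIE_BANDWIDTH

def kernel_time(n_atoms, ops_per_pair=5):
    """Assume an O(n^2) pairwise energy evaluation on the device."""
    return (n_atoms ** 2) * ops_per_pair / GPU_THROUGHPUT

def gpu_pays_off(n_atoms):
    """Offloading helps only once compute outweighs transfer overhead."""
    return kernel_time(n_atoms) > transfer_time(n_atoms)

if __name__ == "__main__":
    for n in (500, 2037, 50_000):
        print(f"{n:>6} atoms: transfer {transfer_time(n):.2e}s, "
              f"kernel {kernel_time(n):.2e}s, pays off: {gpu_pays_off(n)}")
```

With these made-up numbers the break-even lands in the low thousands of atoms, which echoes the quoted finding only qualitatively: below that, fixed bus latency and copy time swamp the kernel, and the GPU brings no net speedup.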
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
If you want to contribute to GPU/x64 development, please see this thread:
https://secure.worldcommunitygrid.org/forums/...ad,29418_offset,20#288494