World Community Grid Forums
Thread Status: Active | Total posts in this thread: 17
BladeD
Ace Cruncher, USA. Joined: Nov 17, 2004. Post Count: 28976. Status: Offline
Why would you write routines for CPUs first, if they can run 10x faster on GPUs?
Former Member
Cruncher. Joined: May 22, 2018. Post Count: 0. Status: Offline
Quote (BladeD): Why would you write routines for CPUs first, if they can run 10x faster on GPUs?

Somebody might actually write a GPU program first, despite the primitive debugging tools available. I suspect they would feel butterflies in their stomach unless there were an easy way to verify the GPU results. As we gain more experienced GPU programmers, more original programs will be written for GPUs. Over the next few years you will have the opportunity to see how many programmers become comfortable with GPUs. I predict a slow pick-up, which is why I support writing parallel algorithms using BLAS, since they are relatively easy to convert to GPUs. Just my prediction.

So far, GPU computing has not exploded. Looking at language development, I expect only slow growth for the next three years. Let's hope I am wrong.

Lawrence
Former Member
Cruncher. Joined: May 22, 2018. Post Count: 0. Status: Offline
Since it is an open source development environment that's being used to simulate protein folding, it would be interesting to know whether our own Human Proteome Folding project is evaluating it as a way of accelerating its work.
Falconet
Master Cruncher, Portugal. Joined: Mar 9, 2009. Post Count: 3315. Status: Offline
Quote: As an open source development environment that's being used to simulate protein folding, it would be interesting to know whether our own Human Proteome Folding project is evaluating it as a way of accelerating their work.

Not from GPUs. The Rosetta software isn't suitable for GPU computation.

----------------------------------------
- AMD Ryzen 5 1600AF 6C/12T 3.2 GHz - 85W
- AMD Ryzen 5 2500U 4C/8T 2.0 GHz - 28W
- AMD Ryzen 7 7730U 8C/16T 3.0 GHz
Jim1348
Veteran Cruncher, USA. Joined: Jul 13, 2009. Post Count: 1066. Status: Offline
Quote (Lawrence): Now, the subroutine library that I would push at every prospective project is BLAS (Basic Linear Algebra Subprograms). Algorithms written using this library exhibit all their potential parallelism, allowing easy compilation to use all the SSE instructions. And GPU programmers can (?easily?) decide whether or not to port the algorithm to GPUs. If more programs had been written using BLAS, we would see a lot more GPU programs today.

Does it matter whether they use OpenCL or CUDA from a WCG point of view? Since HCC used OpenCL, and people around here seem to favor open source, I had assumed that any subsequent projects would use it too, but I have not seen that stated anywhere.
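Lawrence's point about BLAS exposing parallelism can be sketched in Python with NumPy, whose array operators dispatch to an optimized BLAS under the hood. This is my own illustration, not code from the thread; the `gemm` helper below merely mirrors the semantics of the BLAS `dgemm` routine, and the same high-level expression maps directly onto GPU BLAS libraries such as cuBLAS or clBLAS.

```python
import numpy as np

def gemm(alpha, A, B, beta, C):
    """dgemm-style update: alpha * A @ B + beta * C.

    NumPy hands A @ B to the underlying BLAS, which exploits SIMD
    (SSE/AVX) and multiple cores; a GPU port replaces this one call
    with a cuBLAS/clBLAS gemm rather than rewriting the algorithm.
    """
    return alpha * (A @ B) + beta * C

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 5))
C = np.zeros((4, 5))
D = gemm(2.0, A, B, 0.0, C)   # shape (4, 5)
```

Because the whole computation is expressed as one library call, the CPU/GPU decision becomes a choice of BLAS backend rather than a rewrite — which is the portability argument being made above.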
Coleslaw
Veteran Cruncher, USA. Joined: Mar 29, 2007. Post Count: 1343. Status: Offline
Open source was not the determining factor; it was access to a much larger pool of resources. If they had written it in CUDA, it would run only on nVidia cards. By writing in OpenCL, they have a much easier time producing apps that cover a much larger pool of resources.
----------------------------------------
OpenCL = CPU, nVidia, and AMD/ATI (and now even Intel at some projects)
CUDA = nVidia only

Edit: Read back through many older posts and you will find that HCC originally tried CUDA before switching to OpenCL.

[Edit 1 times, last edit by Coleslaw at May 31, 2013 5:17:00 PM]
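Coleslaw's point is structural: code written against a portable layer like OpenCL needs one code path for many device types, while a CUDA-only kernel reaches nVidia hardware alone. A minimal Python sketch of that dispatch idea (my own illustration, not from the thread; the backend names are placeholders, and only the portable CPU path actually runs here):

```python
def vector_add(a, b, backend="cpu"):
    """Add two vectors on the requested backend.

    'cpu' is the portable fallback that works everywhere;
    any other backend name stands in for a vendor-specific
    path and is deliberately left unimplemented.
    """
    if backend == "cpu":
        return [x + y for x, y in zip(a, b)]
    raise NotImplementedError(f"no device available for backend {backend!r}")

result = vector_add([1, 2, 3], [4, 5, 6])   # [5, 7, 9]
```

Writing against the portable path first is the "much larger pool of resources" argument in miniature: every volunteer machine can run it, and vendor-specific acceleration becomes an optional extra rather than a requirement.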
Coleslaw
Veteran Cruncher, USA. Joined: Mar 29, 2007. Post Count: 1343. Status: Offline
Quote: Leave it up to zarck ;>) WCG is a project hoster... it does zero point zero with any science software, except fitting it into the BOINC wrapper. If some research team proposes a project, has the input libs, fits within the mission, can be serially/randomly distributed in problem slices, is of sufficient size so it will last, say, six months [which, of course, when a tech manages to suddenly make the app run 4x faster, as happened to CFSW, may not even happen], and more. So, does WCG 'support' any particular piece of simulation software? Not until a project proposer requires it. Edit: Now there popped out a name... Vijay Pande... is that not F@H?

Since he has pretty much gone to the vast majority of projects asking the same thing, here he points out how FAH did prefer it: http://registro.ibercivis.es/forum_thread.php?id=58

I'm not going to say it was a shameless/shameful plug, because it could be yet another person genuinely excited about GPU advancement. However, it would be nice if someone added more to their question than a simple link "plug" for the user to click on. How about explaining it and the advantages/disadvantages? Or possibly even stating what it would be used for? That, to me, is the shameful part of the post.