World Community Grid Forums
Thread Status: Active | Total posts in this thread: 77
KLiK
Master Cruncher | Croatia | Joined: Nov 13, 2006 | Post Count: 3108 | Status: Offline
1. No, not using a multi-threaded CPU... a single CPU thread with a multi-threaded GPU per WU!
2. Well, you can check the data on the familiar GPUgrid.org... or check out some of the science on SETI@home or Einstein@home? ;)
robertmiles
Senior Cruncher | US | Joined: Apr 16, 2008 | Post Count: 445 | Status: Offline
KLiK wrote:
1. No, not using a multi-threaded CPU... a single CPU thread with a multi-threaded GPU per WU!
2. Well, you can check the data on the familiar GPUgrid.org... or check out some of the science on SETI@home or Einstein@home? ;)

1. Not as a result of the translation. However, if the source code uses a multi-threaded CPU method that I'm not familiar with (even one confined to running all threads on the same CPU core), I am unlikely to understand it well enough to produce the multi-threaded CPU version.
2. It will take some time to gather enough information on those.
[Edited 1 time, last edit by robertmiles at Oct 26, 2016 12:13:15 AM]
Mark100
Cruncher | Joined: Mar 3, 2007 | Post Count: 13 | Status: Offline
So, is there a chance the WCG projects will be made fit for using GPUs? Are they working on it?
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
You tell us: which project could benefit from parallel processing of work that needs doing in serial order? (If parallel processing on 8 threads leaves 7 of them in a wait state 90 percent of the time, how much more or less work do you think can be done compared to 8 single-threaded jobs?)
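(A rough back-of-the-envelope sketch of that question, assuming purely for illustration that one thread stays busy while the other seven do useful work only 10 percent of the time; the figures are the hypothetical ones from the post above, not measurements.)

```python
# Hypothetical throughput comparison for the 8-thread example above.
# Assumption: 1 thread is always busy, the other 7 sit in a wait state 90% of
# the time, i.e. they do useful work only 10% of the time.
threads = 8
busy_fraction_others = 1.0 - 0.90

multi_threaded = 1.0 + (threads - 1) * busy_fraction_others  # ~1.7 cores' worth of work
single_threaded = threads * 1.0                              # 8 independent WUs, all busy

print(f"one 8-thread task   : ~{multi_threaded:.1f} core-equivalents")
print(f"8 single-thread WUs : {single_threaded:.1f} core-equivalents")
print(f"ratio               : {multi_threaded / single_threaded:.0%}")
```

Under those assumptions, the single 8-thread task gets through only about a fifth of the work that eight independent single-threaded WUs would.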
Mark100
Cruncher | Joined: Mar 3, 2007 | Post Count: 13 | Status: Offline
SekeRob wrote:
You tell us: which project could benefit from parallel processing of work that needs doing in serial order? (If parallel processing on 8 threads leaves 7 of them in a wait state 90 percent of the time, how much more or less work do you think can be done compared to 8 single-threaded jobs?)

Er. Are you talking to me?
gb009761
Master Cruncher | Scotland | Joined: Apr 6, 2005 | Post Count: 3010 | Status: Offline
SekeRob wrote:
You tell us: which project could benefit from parallel processing of work that needs doing in serial order? (If parallel processing on 8 threads leaves 7 of them in a wait state 90 percent of the time, how much more or less work do you think can be done compared to 8 single-threaded jobs?)

Mark100 wrote:
Er. Are you talking to me?

Mark100, what SekeRob is saying is that not all projects would benefit from GPU crunching, and that whilst WCG is ready and willing to host/run such a project, it's down to the scientists/projects to get their research GPU-ready. Sure, I'm confident WCG would help get a GPU project on board, but if there are no such projects around, there's not a lot WCG can do to force/invent one. There is also the added factor that not all projects are geared up for, or capable of, receiving, storing and processing the influx of results that "GPUing" a project would produce just to get it through this stage of the research phase. Try doing a search on "GPU" and you'll see numerous threads on this topic.
[Edited 2 times, last edit by gb009761 at Mar 20, 2017 8:55:54 AM]
Mark100
Cruncher | Joined: Mar 3, 2007 | Post Count: 13 | Status: Offline
Thanks for the information. I asked because it's a big difference whether a WU needs 1.5 hours or 3 minutes.
Knowing that WUs take 30 times longer than they might need to, only because the faster pace would be too much for the projects to process, is a bit of a shame.
robertmiles
Senior Cruncher | US | Joined: Apr 16, 2008 | Post Count: 445 | Status: Offline
Mark100 wrote:
Thanks for the information. I asked because it's a big difference whether a WU needs 1.5 hours or 3 minutes. Knowing that WUs take 30 times longer than they might need to, only because the faster pace would be too much for the projects to process, is a bit of a shame.

I've studied GPU programming enough to know that running an unsuitable program on a GPU instead of on a CPU can be as slow as a quarter of the CPU speed. The more suitable programs can run as much as 750 times as fast on the GPU as on the CPU, but around 10 times as fast is more typical.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
The one and only GPGPU project we had, in its late stages, was HCC, and that processed about 18-20x faster than the CPU version. To slow things down, the techs packaged multiple jobs into one task to reduce the server scheduler I/O... the feeder would not have been able to keep up otherwise. Those jobs ran about 2/3 on the GPGPU and 1/3 on the CPU, and I think you could get multiple tasks running concurrently, with AMD OpenCL that is; the NVIDIA cards were not that dandy at it.
The HCC project was, yes, image processing, which is what GPGPUs are good at. MCM could have been another, such was the talk in the early stages, but that went dead. Anyway, good to see confirmation of the facts-versus-fiction differential... it's far from a holy grail. One day the integrated APU (is that what it's called?) will just offload pieces to the GPU part when suitable, which I think I've read Windows 10 is already doing.
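(As an aside, running multiple GPU tasks concurrently is something BOINC users normally arrange with an app_config.xml file in the project's data directory. A hypothetical sketch only: the app name below is a placeholder, and HCC's GPU application is long retired.)

```xml
<!-- Hypothetical app_config.xml: run two GPU tasks per card, each also
     reserving roughly the 1/3 of a CPU core mentioned above. -->
<app_config>
  <app>
    <name>example_gpu_app</name>   <!-- placeholder; use the project's real app name -->
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>   <!-- 0.5 GPU per task, so two tasks share one GPU -->
      <cpu_usage>0.34</cpu_usage>  <!-- CPU share reserved per GPU task -->
    </gpu_versions>
  </app>
</app_config>
```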
KLiK
Master Cruncher | Croatia | Joined: Nov 13, 2006 | Post Count: 3108 | Status: Offline
SekeRob wrote:
You tell us: which project could benefit from parallel processing of work that needs doing in serial order? (If parallel processing on 8 threads leaves 7 of them in a wait state 90 percent of the time, how much more or less work do you think can be done compared to 8 single-threaded jobs?)

Mark100 wrote:
Er. Are you talking to me?

Don't mind him. He's usually like that!
Anyway... WCG hasn't got a GPU project, because the scientists aren't working with the kind of software it was developed for, like the one they have on GPUgrid... so if you also fancy using your GPUs for human science, go there! If you fancy something else, there are also SETI@home and Einstein@home for "star lovers"...
You can find much more information here: https://boinc.berkeley.edu/wiki/GPU_computing