World Community Grid Forums
Thread Status: Active | Total posts in this thread: 822
JWustmann
Cruncher | Joined: Mar 27, 2020 | Post Count: 3 | Status: Offline
Here (https://www.worldcommunitygrid.org/about_us/viewNewsArticle.do?articleId=713) it is stated that we actually have a large backlog of GPU work?

Current status of work units:
CPU
Available for download: 6,063 batches
In progress: 2,199 batches
Completed: 51,189 batches (6,770 batches in the last 30 days; average of 225.7 batches per day)
Estimated backlog: 26.9 days
GPU
Available for download: 16,283 batches
In progress: 4,174 batches
Completed: 52,292 batches (15,366 batches in the last 30 days; average of 512.2 per day)
Estimated backlog: 31.8 days

I like the notion that CPU work can be allocated to other projects like cancer, because GPU efficiency is so much higher for OpenPandemics. I don't know where the bottleneck is, but I'd rather end the pandemic sooner than farm CPU time for "fairness of contribution" reasons. So if the bottleneck is non-technical, please get rid of it. Doing inefficient crunching while efficient crunching is possible is like heating your home in winter by burning paper money. Every CPU cycle that can be spent on CPU-only projects should be used to optimize the output of the grid as a whole.
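For what it's worth, the "estimated backlog" figures quoted above appear to be nothing more than the batches available for download divided by the average batches completed per day over the last 30 days. That formula is an inference from the numbers, not an official WCG definition, and the function name below is just for illustration:

```python
# Sanity check of the "estimated backlog" figures quoted above.
# Assumption (not an official WCG formula): backlog in days =
# batches available for download / average batches completed per day
# over the last 30 days.

def backlog_days(available_batches: float, avg_batches_per_day: float) -> float:
    """Estimated days until the currently available batches run out."""
    return available_batches / avg_batches_per_day

print(f"CPU: {backlog_days(6_063, 225.7):.1f} days")   # ~26.9 days, matching the page
print(f"GPU: {backlog_days(16_283, 512.2):.1f} days")  # ~31.8 days, matching the page
```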
Unixchick
Veteran Cruncher | Joined: Apr 16, 2020 | Post Count: 1314 | Status: Offline
I too was surprised to see the backlog in GPU work. The test did establish a limit on what the system can handle sending out and returning, but I don't think we are anywhere near that. They should increase how many work units they send out every 30 minutes and whittle away at the GPU backlog. I'm betting a little bit of a backlog is good, since people will get cranky if the work runs out, but 31 days is a large backlog.
Acibant
Advanced Cruncher | USA | Joined: Apr 15, 2020 | Post Count: 126 | Status: Offline
While they do have quite a few work units built up to last a while, we've been told the researchers themselves can only process so many results per day. So the bottleneck is very much human in nature.
yoerik
Senior Cruncher | Canada | Joined: Mar 24, 2020 | Post Count: 413 | Status: Offline
Quoting Acibant: "While they do have quite a few work units built up to last a while, we've been told the researchers themselves can only process so many results per day. So the bottleneck is very much human in nature."

^ as outlined here: https://www.worldcommunitygrid.org/forums/wcg...ead,43504_offset,0#659962

"running OpenPandemics at a higher speed will cause the research team to focus the majority of their time and energy on preparing input data sets and archiving returned data rather than performing analysis of the results and moving the interesting results to the next step in the pipeline. As a result, the project will remain at its current speed for the foreseeable future."

The #1 way to increase the GPU rate is to reduce the WUs and batches going out to CPUs. Hence the appeals from WCG to turn on MCM, so that the CPU capacity doesn't disappear and get lost to the grid.
Grumpy Swede
Master Cruncher | Svíþjóð | Joined: Apr 10, 2020 | Post Count: 2550 | Status: Offline
Hehe, so few GPU WU's that I can't even keep my GTX660M fed. It takes a couple of hours to crunch these "new" GPU WU's, but that isn't enough to get new ones while crunching. I even run CPU tasks on the same computer, so there aren't any long back-offs in between requests either.

Why spend time and money developing a GPU app, when it's not used much at all? Ah well, I save electricity at least.
yoerik
Senior Cruncher | Canada | Joined: Mar 24, 2020 | Post Count: 413 | Status: Offline
Quoting Grumpy Swede: "Hehe, so few GPU WU's that I can't even keep my GTX660M fed. It takes a couple of hours to crunch these 'new' GPU WU's, but that isn't enough to get new ones while crunching. I even run CPU tasks on the same computer, so there aren't any long back-offs in between requests either. Why spend time and money developing a GPU app, when it's not used much at all? Ah well, I save electricity at least."

"running OpenPandemics at a higher speed will cause the research team to focus the majority of their time and energy on preparing input data sets and archiving returned data rather than performing analysis of the results and moving the interesting results to the next step in the pipeline. As a result, the project will remain at its current speed for the foreseeable future."
- https://www.worldcommunitygrid.org/forums/wcg...ead,43504_offset,0#659962
erich56
Senior Cruncher | Austria | Joined: Feb 24, 2007 | Post Count: 300 | Status: Offline
Quoting Grumpy Swede: "Why spend time and money developing a GPU app, when it's not used much at all?"

That's exactly what I, too, am thinking all the time.
Robokapp
Senior Cruncher | Joined: Feb 6, 2012 | Post Count: 264 | Status: Offline
The fact that we eat the tasks up as fast as they make them is a good thing.

It means the bottleneck is not the hard part. As far as the science is concerned, this is ideal.
adamradocz
Cruncher | Joined: Mar 20, 2014 | Post Count: 12 | Status: Offline
I think if the GPUs are capable of handling the throughput needed for this project, then CPU time shouldn't be wasted on it, and the CPUs should be allocated to projects that are CPU-only.
bicotz
Advanced Cruncher | Canada | Joined: Apr 25, 2010 | Post Count: 67 | Status: Offline
Quoting adamradocz: "I think if the GPUs are capable of handling the throughput needed for this project, then CPU time shouldn't be wasted on it, and the CPUs should be allocated to projects that are CPU-only."

If the GPU results do the exact same science as the CPU ones, I agree with you.