World Community Grid Forums
Thread status: Active | Total posts in this thread: 822
kittyman
Advanced Cruncher | Joined: May 14, 2020 | Post Count: 140
The kitties try not to complain too loudly......

They of course have the CPUs at work full time. But as long as everybody's CPUs are filled, why not divert more work to the army of GPUs that are just waiting to jump on it? Or perhaps the project is processing as much work through the back end as they can handle? If not, please send more GPU kibbles to the kitties. The wretched COVID virus nearly took the life of my significant other, and I am anxious to do everything here that might reveal its mysteries and give mankind the tools to bring it down.

Meow
Speedy51
Veteran Cruncher | New Zealand | Joined: Nov 4, 2005 | Post Count: 1326
kittyman wrote:
The kitties try not to complain too loudly...... They of course have the CPUs at work full time. But as long as everybody's CPUs are filled, why not divert more work to the army of GPUs that are just waiting to jump on it? Or perhaps the project is processing as much work through the back end as they can handle? Meow

It seems you have hit the nail on the head. I have a feeling they are processing as much work as they can handle, or the scientists may not be able to build batches fast enough. Not sure; this is just a thought.
kwolff88
Cruncher | Joined: Dec 31, 2004 | Post Count: 19
I get the feeling that even if they converted all CPU workunits to GPU, it wouldn't be enough to keep every GPU's work cache full all the time.
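
A back-of-envelope calculation makes that intuition concrete. The sketch below is purely illustrative: the batch size, runtimes, speedup factor, and GPU fleet size are made-up assumptions, not WCG figures.

    # Rough capacity check: would converting every CPU work unit to GPU keep
    # the attached GPUs busy? All numbers below are illustrative assumptions.
    cpu_tasks_per_day = 500_000    # assumed daily CPU work-unit production
    cpu_hours_per_task = 4.0       # assumed CPU runtime per work unit
    gpu_speedup = 30.0             # assumed GPU-vs-CPU speedup per task
    gpu_count = 20_000             # assumed number of attached, willing GPUs

    gpu_hours_needed = cpu_tasks_per_day * (cpu_hours_per_task / gpu_speedup)
    gpu_hours_available = gpu_count * 24

    print(f"GPU-hours needed per day:    {gpu_hours_needed:,.0f}")
    print(f"GPU-hours available per day: {gpu_hours_available:,.0f}")
    print(f"Average GPU utilization:     {gpu_hours_needed / gpu_hours_available:.0%}")

With these assumed numbers the converted workload would keep the GPU fleet busy only about 14% of the time, which is exactly kwolff88's point.
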
Jim1348
Veteran Cruncher | USA | Joined: Jul 13, 2009 | Post Count: 1066
kittyman wrote:
But as long as everybody's CPUs are filled, why not divert more work to the army of GPUs that are just waiting to jump on it?

Uplinger explained earlier that they would use the CPUs for the small molecules, and the GPUs for the bigger ones. I would think they have much more GPU power than they planned on, and are trying to figure out how to get enough large molecules to make use of it. They may also have to develop new techniques to analyze the results. I am slightly perplexed why this was not realized six months ago, but that is not our realm. Ours not to reason why, ours but to do and die.
maeax
Advanced Cruncher | Joined: May 2, 2007 | Post Count: 144
Feel free to do other GPU work with BOINC as long as no work is available for COVID-19!
----------------------------------------
AMD Ryzen Threadripper PRO 3995WX 64-Cores/ AMD Radeon (TM) Pro W6600. OS Win11pro
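
For anyone who wants to automate maeax's suggestion: BOINC already supports backup projects natively, by giving the other project a resource share of 0 on its website, so the client only requests work from it when nothing else is available. The cruder sketch below drives the same idea through boinccmd; it assumes boinccmd is on the PATH, that the local client accepts its RPCs, and that the backup-project URL is a hypothetical placeholder rather than a recommendation.

    # Sketch: keep a backup GPU project suspended while World Community Grid
    # GPU tasks (OPNG...) are present locally, and resume it when they run dry.
    # Assumes boinccmd is on the PATH and the client allows RPCs from this host.
    import subprocess

    BACKUP_URL = "https://example-backup-project.org/"   # hypothetical placeholder

    def boinccmd(*args):
        """Run boinccmd and return its stdout as text."""
        return subprocess.run(["boinccmd", *args],
                              capture_output=True, text=True, check=True).stdout

    def wcg_gpu_tasks_present():
        """Rough check: is any OpenPandemics GPU work unit (OPNG...) queued?"""
        return "OPNG" in boinccmd("--get_tasks")

    if wcg_gpu_tasks_present():
        boinccmd("--project", BACKUP_URL, "suspend")
    else:
        boinccmd("--project", BACKUP_URL, "resume")

Run it from cron or Task Scheduler every few minutes; note that suspending a project also pauses any of its tasks that are already in progress.
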
kittyman
Advanced Cruncher | Joined: May 14, 2020 | Post Count: 140
The kitties would like to see more steady GPU kibbles coming from WCG too. But they are happily meowing through the bits that they are sent.
----------------------------------------
Meow!
erich56
Senior Cruncher | Austria | Joined: Feb 24, 2007 | Post Count: 300
The WUs from the OPNG_0041... batch are really strange. They take a lot longer than all the other ones so far, and most of the time the GPU stays unutilized. Is this by design, or is something wrong with them?
Ian-n-Steve C.
Senior Cruncher | United States | Joined: May 15, 2020 | Post Count: 180
erich56 wrote:
The WUs from the OPNG_0041... batch are really strange. They take a lot longer than all the other ones so far, and most of the time the GPU stays unutilized. Is this by design, or is something wrong with them?

I noticed this during the stress test also. The vast majority of the runtime difference comes down to the AutoGrid run time. On the older tasks it only takes a few seconds; on these new tasks, AutoGrid takes several minutes. This appears to be a CPU-based setup step before anything runs on the GPU, so the GPU will be idle during this time. So far no one has explained the reason for this.
----------------------------------------
EPYC 7V12 / [5] RTX A4000
EPYC 7B12 / [5] RTX 3080Ti + [2] RTX 2080Ti
EPYC 7B12 / [6] RTX 3070Ti + [2] RTX 3060
[2] EPYC 7642 / [2] RTX 2080Ti
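
On NVIDIA cards that idle setup phase is easy to see by sampling utilization while an OPNG task starts up. A minimal sketch, assuming nvidia-smi is on the PATH and that the task is running on GPU index 0:

    # Print GPU utilization once per second; the AutoGrid setup phase shows up
    # as a run of 0% readings before the GPU docking work begins. Ctrl-C to stop.
    import subprocess
    import time

    def gpu_utilization(index=0):
        out = subprocess.run(
            ["nvidia-smi", "-i", str(index),
             "--query-gpu=utilization.gpu",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True).stdout
        return int(out.strip())

    while True:
        print(f"{time.strftime('%H:%M:%S')}  GPU 0: {gpu_utilization()}%")
        time.sleep(1)

A stretch of 0% at the start of each task, followed by sustained high utilization, matches the AutoGrid-then-AutoDock pattern described above.
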
pututu
Senior Cruncher | United States | Joined: Jan 3, 2016 | Post Count: 243
Anyone getting more GPU tasks than usual? It feels like more than 90,000 tasks are being deployed per day. Just a hunch. Maybe another round of GPU stress test?

Since the tasks take a longer time to finish, maybe more are available for everyone?
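
One way to judge whether more tasks than usual are arriving is to count what the local client is actually holding. A small sketch, assuming boinccmd is on the PATH; the parsing leans on the "name:" lines that boinccmd --get_tasks prints, which can vary a little between client versions:

    # Count queued OpenPandemics GPU (OPNG) tasks on the local BOINC client.
    import subprocess

    out = subprocess.run(["boinccmd", "--get_tasks"],
                         capture_output=True, text=True, check=True).stdout

    # Each task entry includes a "name: <workunit>_<n>" line; OPNG is the
    # OpenPandemics GPU work-unit prefix seen in this thread.
    names = [line.split(":", 1)[1].strip()
             for line in out.splitlines()
             if line.strip().startswith("name:")]
    opng = [n for n in names if n.startswith("OPNG")]

    print(f"{len(opng)} OPNG tasks out of {len(names)} total in the local queue")

Logging that count once an hour gives a per-host trend to set against the 90,000-per-day hunch.
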
erich56
Senior Cruncher | Austria | Joined: Feb 24, 2007 | Post Count: 300
pututu wrote:
Anyone getting more GPU tasks than usual? It feels like more than 90,000 tasks are being deployed per day. Just a hunch. Maybe another round of GPU stress test?

The strange thing is: my PCs with the smaller GPUs get quite a lot of WUs, and the PC with two high-end GPUs inside barely gets any :-(
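
Why one host gets plenty of GPU work while another barely gets any is usually visible in the client's own work-fetch decisions. The sketch below enables BOINC's work_fetch_debug and sched_op_debug log flags and tells the running client to reload its configuration. The data-directory path is an assumption (it differs by OS and install), and the script would overwrite an existing cc_config.xml, so merge by hand if you already have one.

    # Enable verbose work-fetch and scheduler-RPC logging on the local client.
    import subprocess
    from pathlib import Path

    DATA_DIR = Path("/var/lib/boinc-client")   # assumed location; adjust per install

    CC_CONFIG = """<cc_config>
      <log_flags>
        <work_fetch_debug>1</work_fetch_debug>
        <sched_op_debug>1</sched_op_debug>
      </log_flags>
    </cc_config>
    """

    (DATA_DIR / "cc_config.xml").write_text(CC_CONFIG)
    # Ask the running client to re-read the configuration file.
    subprocess.run(["boinccmd", "--read_cc_config"], check=True)

Afterwards the event log (visible in BOINC Manager, and typically written to stdoutdae.txt in the data directory) shows, for each scheduler contact, how much GPU work was requested and why requests are deferred, which is usually enough to tell whether the high-end host is not asking for work or is being told there is none.
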