World Community Grid Forums
Thread Status: Active | Total posts in this thread: 781
wildhagen
Veteran Cruncher | The Netherlands | Joined: Jun 5, 2009 | Post Count: 830 | Status: Offline
I get a few GPU workunits sometimes, but only one or two at a time. Almost all are reruns (_2), with some _1's in between.
----------------------------------------
[Edited 1 time, last edit by wildhagen at May 3, 2021 7:02:17 AM]
hnapel
Advanced Cruncher | Netherlands | Joined: Nov 17, 2004 | Post Count: 82 | Status: Offline
My slowest machine is munching the last GPU jobs from its cache; is it over? If so, it was epic, and good as long as it lasted.
Richard Haselgrove
Senior Cruncher | United Kingdom | Joined: Feb 19, 2021 | Post Count: 360 | Status: Offline
Got an error on a 'Server abort':
03/05/2021 09:05:10 | World Community Grid | [cpu_sched] Preempting OPNG_0022502_00155_2 (removed from memory)
03/05/2021 09:07:14 | World Community Grid | [sched_op] handle_scheduler_reply(): got ack for task OPNG_0022502_00155_2
03/05/2021 09:07:14 | World Community Grid | [error] garbage_collect(); still have active task for acked result OPNG_0022502_00155_2; state 0
03/05/2021 09:07:15 | World Community Grid | Output file OPNG_0022502_00155_2_r1155594928_0 for task OPNG_0022502_00155_2 absent
03/05/2021 09:07:15 | World Community Grid | Output file OPNG_0022502_00155_2_r1155594928_1 for task OPNG_0022502_00155_2 absent

The task only ran for 10 seconds on a slow iGPU: it was well within the initial CPU setup phase when it received the abort. Does the setup phase handle BOINC API calls properly?
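For readers following the timeline: the BOINC event-log lines above use a fixed `date | project | message` layout, so the gap between the preempt and the server ack is easy to extract. A minimal sketch (the parsing helper is illustrative, not part of BOINC):

```python
# Parse BOINC event-log lines like those quoted above and measure how long
# the task lived between being preempted and being acked by the server.
from datetime import datetime

LOG = """\
03/05/2021 09:05:10 | World Community Grid | [cpu_sched] Preempting OPNG_0022502_00155_2 (removed from memory)
03/05/2021 09:07:14 | World Community Grid | [sched_op] handle_scheduler_reply(): got ack for task OPNG_0022502_00155_2
"""

def parse(line):
    """Split one event-log line into (timestamp, project, message)."""
    stamp, project, message = (part.strip() for part in line.split(" | ", 2))
    return datetime.strptime(stamp, "%d/%m/%Y %H:%M:%S"), project, message

events = [parse(line) for line in LOG.splitlines()]
elapsed = events[1][0] - events[0][0]
print(elapsed)  # 0:02:04 between the preempt and the server ack
```

The roughly two-minute gap is consistent with the post: the client had already removed the task from memory well before the scheduler's abort/ack arrived.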
biini
Senior Cruncher | Finland | Joined: Jan 25, 2007 | Post Count: 334 | Status: Offline
GPU WUs started to come in again
Grumpy Swede
Master Cruncher | Svíþjóð | Joined: Apr 10, 2020 | Post Count: 2165 | Status: Offline
And just now I got a refill of GPU tasks, and the whole WCG website, as well as the BOINC part of WCG, just drowned in molasses. Backoffs and slow downloads are with us again, of course.
----------------------------------------
[Edited 1 time, last edit by Grumpy Swede at May 3, 2021 8:34:25 AM]
bozz4science
Advanced Cruncher | Germany | Joined: May 3, 2020 | Post Count: 104 | Status: Offline
My cache has been empty for hours; unfortunately, no new work is being dispatched to my system. I discovered this morning that my PC had a cache of nearly 1,000 OPN1 WUs, despite being on a 1 + 0.1 day cache setting. Naturally, I had to abort most of them, as my 8-core machine can only handle so much, and I'd prefer to crunch OPN tasks on my GPUs anyway.
Hope we'll see a flow of new GPU work soon.
----------------------------------------
AMD Ryzen 3700X @ 4.0 GHz / GTX 1660S
Intel i5-4278U CPU @ 2.60 GHz
[Edited 1 time, last edit by bozz4science at May 3, 2021 8:56:17 AM]
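As a back-of-the-envelope check, a cache of nearly 1,000 WUs is far beyond what a 1 + 0.1 day setting on an 8-core machine should hold. A minimal sketch of the arithmetic — the core count and cache setting come from the post; the three-hour average OPN1 runtime is an assumption:

```python
# Rough estimate of how many CPU work units a BOINC work-buffer setting
# should justify: enough tasks to keep every core busy for the requested
# number of buffered days.

def expected_cache_size(cores, cache_days, extra_days, avg_runtime_hours):
    """Tasks needed to keep `cores` busy for the requested buffer."""
    buffer_hours = (cache_days + extra_days) * 24
    return cores * buffer_hours / avg_runtime_hours

# 8 cores, "1 + 0.1 day" setting, assumed ~3 h per OPN1 CPU task
est = expected_cache_size(cores=8, cache_days=1.0, extra_days=0.1,
                          avg_runtime_hours=3.0)
print(f"~{est:.0f} tasks")  # prints ~70 tasks
```

Even with generous runtime assumptions the expected buffer is on the order of tens of tasks, so a ~1,000-WU cache points to the scheduler over-assigning work rather than to the user's settings.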
hnapel
Advanced Cruncher | Netherlands | Joined: Nov 17, 2004 | Post Count: 82 | Status: Offline
They really need to start feeding new GPU batches more slowly, in a ramp-up fashion; now that there are new jobs, they will not download properly.
Skivelitis2
Advanced Cruncher | USA | Joined: Mar 21, 2015 | Post Count: 113 | Status: Offline
Let's remember....this is a stress test.
tux93
Cruncher | Germany | Joined: Jan 5, 2012 | Post Count: 9 | Status: Offline
"Let's remember....this is a stress test."

Reading this thread, I'm not sure who's stressed more: the infra or the volunteers xD
----------------------------------------
Primary: Intel i7-4790 + nVidia GTX 1060
Secondary: Intel i7-2600 + nVidia GTX 750 Ti
OS: openSUSE Tumbleweed
spRocket
Senior Cruncher | Joined: Mar 25, 2020 | Post Count: 274 | Status: Offline
I saw the GPU tasks drying up last night as I went to bed, after transfers had started working better, but didn't feel like worrying about it. Checking my UPS power-draw graph, it looks like I ran out around midnight Chicago time and started getting them again around 3:30. I'm seeing my usual 30-60 work units of all types in the queue (0.1 days for both queue settings).
The cruncher is happily crunching away, and the transfers are flowing smoothly. Conjecture: did they temporarily turn off the supply to clear a jam?