wildhagen
Veteran Cruncher
The Netherlands
Joined: Jun 5, 2009
Post Count: 830
Re: OpenPandemics - GPU Stress Test

I get a few GPU workunits now and then, but only one or two at a time. Almost all are reruns (_2), with some _1s in between.
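For anyone sorting their results list: the trailing _N in a BOINC result name is the issue index of that copy of the workunit, so _2 and higher are typically resends of failed or timed-out copies. A minimal sketch of pulling that index out of a name (illustrative only, not WCG code):

// Extract the trailing issue index from a BOINC result name such as
// "OPNG_0022502_00155_2". Higher indices are usually resent copies.
#include <iostream>
#include <string>

int issue_index(const std::string& result_name) {
    auto pos = result_name.rfind('_');
    if (pos == std::string::npos) return -1;        // no suffix present
    return std::stoi(result_name.substr(pos + 1));  // throws on a malformed name
}

int main() {
    std::cout << issue_index("OPNG_0022502_00155_2") << '\n';  // prints 2
}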
----------------------------------------
[Edit 1 times, last edit by wildhagen at May 3, 2021 7:02:17 AM]
[May 3, 2021 7:01:40 AM]
hnapel
Advanced Cruncher
Netherlands
Joined: Nov 17, 2004
Post Count: 82
Re: OpenPandemics - GPU Stress Test

My slowest machine is munching through the last GPU jobs in its cache. Is it over? If so, it was epic, and good while it lasted.
[May 3, 2021 8:05:06 AM]
Richard Haselgrove
Senior Cruncher
United Kingdom
Joined: Feb 19, 2021
Post Count: 360
Re: OpenPandemics - GPU Stress Test

Got an error on a 'Server abort':
03/05/2021 09:05:10 | World Community Grid | [cpu_sched] Preempting OPNG_0022502_00155_2 (removed from memory)
03/05/2021 09:07:14 | World Community Grid | [sched_op] handle_scheduler_reply(): got ack for task OPNG_0022502_00155_2
03/05/2021 09:07:14 | World Community Grid | [error] garbage_collect(); still have active task for acked result OPNG_0022502_00155_2; state 0
03/05/2021 09:07:15 | World Community Grid | Output file OPNG_0022502_00155_2_r1155594928_0 for task OPNG_0022502_00155_2 absent
03/05/2021 09:07:15 | World Community Grid | Output file OPNG_0022502_00155_2_r1155594928_1 for task OPNG_0022502_00155_2 absent
The task had only run for 10 seconds on a slow iGPU: it was still well within the initial CPU setup phase when it received the abort. Does the setup phase handle BOINC API calls properly?
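For context, a minimal sketch of how a CPU-bound setup loop can stay responsive to an abort request (assuming the standard BOINC API headers; the setup-step names are hypothetical, and whether the OPNG app actually polls like this is exactly the open question):

// Illustrative only: a setup loop that polls the BOINC client for
// abort/quit messages between units of CPU work, so a server abort
// is acknowledged cleanly instead of leaving an active task behind.
#include "boinc_api.h"   // from the BOINC source tree
#include <cstdlib>

static const int num_setup_steps = 1000;            // hypothetical
static void do_one_setup_step(int /*step*/) {}      // hypothetical unit of setup work

int main() {
    boinc_init();
    BOINC_STATUS status;
    for (int step = 0; step < num_setup_steps; ++step) {
        do_one_setup_step(step);
        boinc_get_status(&status);                  // non-blocking poll of client messages
        if (status.abort_request)
            boinc_finish(EXIT_ABORTED_BY_CLIENT);   // clean exit; client can ack the result
        if (status.quit_request || status.no_heartbeat)
            exit(0);                                // client gone; task restarts later
    }
    // ... GPU docking phase would follow here ...
    boinc_finish(0);
}

If the setup phase never polls (or is stuck in a long blocking call), the "still have active task for acked result" garbage-collect error above is roughly what you'd expect to see.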
[May 3, 2021 8:17:11 AM]
biini
Senior Cruncher
Finland
Joined: Jan 25, 2007
Post Count: 334
Re: OpenPandemics - GPU Stress Test

GPU WUs have started to come in again.
----------------------------------------

rtx, xeon, i9, ryzen, rnd laptops
dAM0NES 1991: ppl interested in beer, Amigas or electronic music
[May 3, 2021 8:33:12 AM]
Grumpy Swede
Master Cruncher
Svíþjóð
Joined: Apr 10, 2020
Post Count: 2165
Re: OpenPandemics - GPU Stress Test

And just now I got a refill of GPU tasks, and the whole WCG website, as well as the BOINC side of WCG, drowned in molasses.

Backoffs and slow downloads are with us again, of course.
----------------------------------------
[Edit 1 times, last edit by Grumpy Swede at May 3, 2021 8:34:25 AM]
[May 3, 2021 8:33:18 AM]
bozz4science
Advanced Cruncher
Germany
Joined: May 3, 2020
Post Count: 104
Re: OpenPandemics - GPU Stress Test

My cache has been empty for hours; unfortunately, no new work is being dispatched to my system. I discovered this morning that my PC had a cache of nearly 1,000 OPN1 WUs, despite a 1 + 0.1 day cache setting. Naturally, I had to abort most of them, as my 8-core machine can only handle so much, and I'd prefer to crunch OPN tasks on my GPUs anyway.
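For scale, a back-of-envelope check of what a 1 + 0.1 day buffer should actually hold on an 8-core host (the average runtime here is my assumption, not a measured figure):

// Rough expected queue depth for a CPU work buffer. Inputs are
// illustrative; only the formula matters:
// cores * buffer_days * 24 / hours_per_task.
#include <iostream>

int main() {
    const double cores = 8;
    const double buffer_days = 1.0 + 0.1;   // "store at least" + "additional" days
    const double hours_per_task = 4.0;      // assumed average OPN1 runtime
    std::cout << cores * buffer_days * 24.0 / hours_per_task << " tasks\n";  // ~53
}

Roughly 50 tasks, nowhere near 1,000, which suggests the scheduler badly underestimated the per-task runtime.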

Hope that we'll see a flow of new GPU work soon.
----------------------------------------

AMD Ryzen 3700X @ 4.0 GHz / GTX1660S
Intel i5-4278U CPU @ 2.60GHz
----------------------------------------
[Edit 1 times, last edit by bozz4science at May 3, 2021 8:56:17 AM]
[May 3, 2021 8:43:24 AM]
hnapel
Advanced Cruncher
Netherlands
Joined: Nov 17, 2004
Post Count: 82
Re: OpenPandemics - GPU Stress Test

They really need to start feeding new GPU batches in more slowly, ramping up gradually; now that there are new jobs, they won't download properly.
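Something along these lines (purely illustrative; this is not how WCG's feeder is actually implemented): release a small number of results per cycle and grow the rate while downloads keep succeeding, backing off when the failure rate climbs.

// Sketch of a ramp-up feeder: multiplicative increase while healthy,
// halving when the download-failure rate crosses a threshold.
#include <algorithm>
#include <iostream>

int main() {
    int rate = 100;                                 // results released per cycle
    const int max_rate = 100000, min_rate = 100;
    const double failure_samples[] = {0.01, 0.02, 0.10, 0.01};  // assumed measurements
    for (double f : failure_samples) {
        if (f < 0.05) rate = std::min(rate * 2, max_rate);  // healthy: ramp up
        else          rate = std::max(rate / 2, min_rate);  // congested: back off
        std::cout << "release " << rate << " results this cycle\n";
    }
}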
[May 3, 2021 8:43:49 AM]
Skivelitis2
Advanced Cruncher
USA
Joined: Mar 21, 2015
Post Count: 113
Re: OpenPandemics - GPU Stress Test

Let's remember....this is a stress test.
----------------------------------------

[May 3, 2021 9:12:53 AM]
tux93
Cruncher
Germany
Joined: Jan 5, 2012
Post Count: 9
Re: OpenPandemics - GPU Stress Test

Skivelitis2 wrote: Let's remember....this is a stress test.

Reading this thread, I'm not sure who's stressed more: the infra or the volunteers xD
----------------------------------------


Primary: Intel i7-4790 + nVidia GTX 1060
Secondary: Intel i7-2600 + nVidia GTX 750 Ti
OS: openSUSE Tumbleweed
[May 3, 2021 9:24:20 AM]
spRocket
Senior Cruncher
Joined: Mar 25, 2020
Post Count: 274
Re: OpenPandemics - GPU Stress Test

I saw the GPU tasks drying up last night as I went to bed, after transfers had started working better, but I didn't feel like worrying about it. Checking my UPS power-draw graph, it looks like I ran out around midnight Chicago time and started getting them again around 3:30. I'm seeing my usual 30-60 work units of all types in the queue (0.1 days for both queue settings).

The cruncher is happily crunching away, and the transfers are flowing smoothly.

Conjecture: they temporarily turned off the supply to clear up a jam?
[May 3, 2021 11:16:41 AM]