World Community Grid Forums
Thread Status: Active | Total posts in this thread: 5
Former Member | Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
I've just noticed that a GFAM WU downloaded on 14 March (at 10.40 UTC) has a 17 April deadline (at 11.22 UTC) as shown in BOINC Manager. That's 34 days.

In Results Status it shows as Time Due 19/04/12. I understand the reason for the extra two days (it's been discussed as part of the server software update), but the over-long deadline seems wrong. What's more, it's now running (at normal priority), along with other WUs downloaded at around the same time. It's as if BOINC knows that the deadline should be 24 March (10 days) and that this WU is next in line as per normal. So what's up with the display of an April deadline?
Former Member | Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Apart from the dubious date, the deadline is not of primary importance when no panic is perceived, i.e. there are no queued tasks that would not complete in time. Until that state occurs, BOINC processes tasks on a per-grid basis in FIFO order, i.e. the order in which tasks were received, not based on their deadline.
--/7--
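BOINC's real client scheduler is far more elaborate, but the behaviour described above can be sketched minimally: run tasks in arrival (FIFO) order unless a dry run shows some task would miss its deadline, in which case fall back to earliest-deadline-first. All names and numbers below are illustrative, not actual BOINC code.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    received: int      # arrival order (lower = received earlier)
    deadline: float    # hours from now
    remaining: float   # estimated hours of work left

def next_task(tasks, throughput=1.0):
    """Pick the next task to run.

    Normal mode: FIFO (run the task received earliest).
    Panic mode: if running in FIFO order would make any task miss
    its deadline, fall back to earliest-deadline-first.
    """
    fifo = sorted(tasks, key=lambda t: t.received)
    elapsed = 0.0
    for t in fifo:
        elapsed += t.remaining / throughput
        if elapsed > t.deadline:
            # A task would finish after its deadline: panic mode.
            return min(tasks, key=lambda t: t.deadline)
    return fifo[0]

# Generous deadlines: FIFO order, so the earlier-received A runs first.
relaxed = [Task("A", 1, 240.0, 6.0), Task("B", 2, 240.0, 6.0)]
print(next_task(relaxed).name)  # -> A

# B would miss its 10-hour deadline behind A: B jumps the queue.
tight = [Task("A", 1, 240.0, 6.0), Task("B", 2, 10.0, 6.0)]
print(next_task(tight).name)    # -> B
```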
Former Member | Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Thanks, Rob, not sure I knew that. No panic here.
Former Member | Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
The WU ended up Valid as I expected, so it was just the initial 34-day deadline that was strange. Could this be an oddity from the server update?
GFAM_x1cjbA_pfHGPRTase_noPOP_0011530_0065_0-- 611 Valid 14/03/12 10:40:54 16/03/12 00:35:04 6.36 199.9 / 197.3
Former Member | Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
The server knows the compute capacity of all devices that have returned work before, and there were reports of funky FPOPS values in task headers. The server is programmed to grant extra time when the amount of estimated computing is much larger than that compute capacity, but then I would have expected that your client would also show a very high Time to Compute estimate. Alternatively, maybe your client had a momentary lapse in the on_frac/active_frac controls, which made the deadline go out way long. Buggy, I'd think, as 34-day deadlines would cause serious batch congestion when results could be moved off and returned to the scientists much sooner.
Nothing to worry about... just let them run and let the client deal with the scheduling. --//--
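The deadline-stretching behaviour described above can be sketched as a simple heuristic: if the task's estimated FLOP count would take this host far longer than the normal deadline, extend the deadline proportionally. This is an illustrative guess at the logic, not actual BOINC/WCG server code, and all parameter names and numbers are assumptions.

```python
def assign_deadline(estimated_fpops, host_flops, base_days=10.0, slack=2.0):
    """Illustrative deadline heuristic (not actual BOINC server code).

    If the estimated computation would blow past the normal base
    deadline on this host, stretch the deadline so the task can
    plausibly finish; otherwise keep the normal base deadline.
    """
    est_days = estimated_fpops / host_flops / 86_400  # seconds -> days
    return max(base_days, est_days * slack)

# Sane task header on a 10 GFLOPS host: normal 10-day deadline.
print(assign_deadline(2e14, 1e10))    # -> 10.0
# Inflated ("funky") FPOPS in the header: the deadline balloons
# to roughly the 34 days seen in this thread.
print(round(assign_deadline(1.5e16, 1e10), 1))  # -> 34.7
```

This also shows why an inflated FPOPS value would normally be visible on the client as a very high time-to-completion estimate: the same estimate drives both numbers.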